1.
Chen Q, Chen Y, Zou R, Shuai W, Guo Y, Wang J, Cao N. Chart2Vec: A Universal Embedding of Context-Aware Visualizations. IEEE Transactions on Visualization and Computer Graphics 2025; 31:2167-2181. PMID: 38551829. DOI: 10.1109/tvcg.2024.3383089.
Abstract
Advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both the structural and the semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.
2.
Podo L, Prenkaj B, Velardi P. Agnostic Visual Recommendation Systems: Open Challenges and Future Directions. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1902-1917. PMID: 38466597. DOI: 10.1109/tvcg.2024.3374571.
Abstract
Visualization Recommendation Systems (VRSs) are a novel and challenging field of study that aims to generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as "agnostic" VRSs, since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and the lack of quantitative criteria for evaluating the perceptual effectiveness of generated plots. This article summarizes the literature on agnostic VRSs and outlines promising future research directions.
3.
Guan Q, Cheng X, Xiao F, Li Z, He C, Fang L, Chen G, Gong Z, Luo W. Explainable exercise recommendation with knowledge graph. Neural Netw 2025; 183:106954. PMID: 39667214. DOI: 10.1016/j.neunet.2024.106954.
Abstract
Recommending suitable exercises and explaining those recommendations is a highly valuable task, as it can significantly improve students' learning efficiency. Nevertheless, the extensive range of exercise resources and the diverse learning capacities of students make exercise recommendation notably difficult. Collaborative filtering approaches frequently have difficulty recommending suitable exercises, whereas deep learning methods lack explainability, which restricts their practical use. To address these issues, this paper proposes KG4EER, an explainable exercise recommendation method based on a knowledge graph. KG4EER matches students with suitable exercises and offers explanations for its recommendations. More precisely, a feature extraction module is introduced to represent students' learning features, and a knowledge graph is constructed to recommend exercises. The knowledge graph comprises three primary entity types (knowledge concepts, students, and exercises) and their interrelationships. Extensive experiments conducted on three real-world datasets, coupled with expert interviews, establish the superiority of KG4EER over existing baseline methods and underscore its robust explainability.
Affiliation(s)
- Quanlong Guan
- College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China; Guangdong Institution of Smart Education, Jinan University, Guangzhou, Guangdong, China
- Xinghe Cheng
- College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China; Guangdong Institution of Smart Education, Jinan University, Guangzhou, Guangdong, China
- Fang Xiao
- College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China; Guangdong Institution of Smart Education, Jinan University, Guangzhou, Guangdong, China
- Zhuzhou Li
- College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China; Guangdong Institution of Smart Education, Jinan University, Guangzhou, Guangdong, China
- Chaobo He
- South China Normal University, Guangzhou, Guangdong, China
- Liangda Fang
- College of Information Science and Technology, Jinan University, Guangzhou, Guangdong, China; Pazhou Lab, Guangzhou, Guangdong, China
- Guanliang Chen
- Faculty of Information Technology, Monash University, Melbourne, Victoria, Australia
- Zhiguo Gong
- Department of Computer and Information Science, University of Macau, Macao Special Administrative Region of China
- Weiqi Luo
- Guangdong Institution of Smart Education, Jinan University, Guangzhou, Guangdong, China
4.
Tian Y, Cui W, Deng D, Yi X, Yang Y, Zhang H, Wu Y. ChartGPT: Leveraging LLMs to Generate Charts From Abstract Natural Language. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1731-1745. PMID: 38386583. DOI: 10.1109/tvcg.2024.3368621.
Abstract
The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications, since users' natural language inputs are generally abstract (i.e., ambiguous or under-specified) and lack a clear specification of visual encodings; this obstructs the wide use of NLIs in chart generation. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this trend, we propose ChartGPT, which generates charts from abstract natural language inputs. However, LLMs struggle with complex logical reasoning. To enable the model to accurately specify the complex parameters and perform the operations required in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might bias them for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step. The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.
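The step-by-step decomposition described above can be pictured as a chain of narrow sub-tasks, each consuming the previous step's output. The following is a hypothetical sketch only: the step names and the stubbed `ask_llm` function are assumptions for illustration, not ChartGPT's actual prompts or pipeline.

```python
def ask_llm(step, utterance, state):
    # Stand-in for one LLM call that solves a single, specific sub-task.
    # Each step is stubbed with a fixed answer purely for illustration.
    canned = {
        "select_columns": ["country", "sales"],
        "choose_mark": "bar",
        "map_encodings": {"x": "country", "y": "sales"},
    }
    return canned[step]

def generate_chart(utterance):
    # Run the pipeline one sub-task at a time instead of asking the
    # model for a full chart specification in a single shot.
    state = {}
    for step in ["select_columns", "choose_mark", "map_encodings"]:
        state[step] = ask_llm(step, utterance, state)
    return {"mark": state["choose_mark"], "encoding": state["map_encodings"]}

spec = generate_chart("show sales by country")
```

Because each call sees the accumulated `state`, a later step (encoding mapping) can condition on earlier decisions (selected columns, chosen mark), which is the point of the decomposition.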
5.
Wang HW, Gordon M, Battle L, Heer J. DracoGPT: Extracting Visualization Design Preferences from Large Language Models. IEEE Transactions on Visualization and Computer Graphics 2025; 31:710-720. PMID: 39283801. DOI: 10.1109/tvcg.2024.3456350.
Abstract
Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines-DracoGPT-Rank and DracoGPT-Recommend-to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs.
6.
Zeng X, Lin H, Ye Y, Zeng W. Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning. IEEE Transactions on Visualization and Computer Graphics 2025; 31:525-535. PMID: 39255172. DOI: 10.1109/tvcg.2024.3456159.
Abstract
Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distributions divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill these gaps, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. We also contribute a dataset split as a benchmark for future research. Source code and datasets are available at https://github.com/zengxingchen/ChartQA-MLLM.
7.
Ma J, Li K, Zhang F, Wang Y, Luo X, Li C, Qiao Y. BGAT-CCRF: A novel end-to-end model for knowledge graph noise correction. Neural Netw 2024; 180:106715. PMID: 39276587. DOI: 10.1016/j.neunet.2024.106715.
Abstract
Knowledge graph (KG) noise correction aims to select suitable candidates to correct the noises in KGs. Most existing studies have limited performance in repairing noisy triples that contain more than one incorrect entity or relation, which significantly constrains their implementation in real-world KGs. To overcome this challenge, we propose a novel end-to-end model (BGAT-CCRF) that achieves better noise correction results. Specifically, we construct a balanced-based graph attention model (BGAT) to learn the features of nodes in triples' neighborhoods and capture the correlation between nodes based on their position and frequency. Additionally, we design a constrained conditional random field model (CCRF) that selects suitable candidates guided by three constraints for correcting one or more noises in a triple. In this way, BGAT-CCRF can select multiple candidates from a smaller domain to repair multiple noises in a triple simultaneously, rather than selecting candidates from the whole KG as traditional methods do, which can only repair one noise in a triple at a time. The effectiveness of BGAT-CCRF is validated by KG noise correction experiments. Compared with state-of-the-art models, BGAT-CCRF improves the fMRR metric by 3.58% on the FB15K dataset. Hence, it has the potential to facilitate the implementation of KGs in the real world.
Affiliation(s)
- Jiangtao Ma
- College of Computer and Information Engineering, Tianjin Normal University, Tianjin, 300387, China; College of Computer Science and Technology, Zhengzhou University of Light Industry, Zhengzhou, 450000, China
- Kunlin Li
- College of Computer Science and Technology, Zhengzhou University of Light Industry, Zhengzhou, 450000, China
- Fan Zhang
- China National Digital Switching System Engineering and Technology R&D Center, Zhengzhou, 450001, China
- Yanjun Wang
- College of Computer Science and Technology, Zhengzhou University of Light Industry, Zhengzhou, 450000, China
- Xiangyang Luo
- State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, 450001, China
- Chenliang Li
- School of Cyber Science and Engineering, Wuhan University, Wuhan, 430079, China
- Yaqiong Qiao
- College of Cyber Science, Nankai University, Tianjin, 300350, China
8.
Huang C, Yu F, Wan Z, Li F, Ji H, Li Y. Knowledge graph confidence-aware embedding for recommendation. Neural Netw 2024; 180:106601. PMID: 39321562. DOI: 10.1016/j.neunet.2024.106601.
Abstract
Knowledge graphs (KGs) are vital for extracting and storing knowledge from large datasets. Current research favors knowledge graph-based recommendation methods, but these often overlook the learning of relation features between entities and focus excessively on entity-level details. Moreover, they ignore a crucial fact: the aggregation process of entity and relation features in a KG is complex, diverse, and imbalanced. To address this, we propose a recommendation-oriented KG confidence-aware embedding technique. It introduces an information aggregation graph and a confidence feature aggregation mechanism to overcome these challenges. Additionally, we quantify entity confidence at the feature and category levels, improving the precision of embeddings during information propagation and aggregation. Our approach achieves significant improvements over state-of-the-art KG embedding-based recommendation methods, with up to a 6.20% increase in AUC and an 8.46% increase in GAUC, as demonstrated on four public KG datasets.
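Confidence-weighted aggregation of the kind the abstract describes can be pictured as a weighted neighborhood average. This is a toy sketch under my own assumptions (scalar confidence scores and simple normalization), not the paper's actual mechanism:

```python
def confidence_aggregate(neighbor_feats, confidences):
    # Weight each neighbor's feature vector by its confidence score and
    # normalize, so low-confidence (possibly noisy) neighbors contribute less.
    total = sum(confidences)
    dim = len(neighbor_feats[0])
    out = [0.0] * dim
    for feat, conf in zip(neighbor_feats, confidences):
        for i in range(dim):
            out[i] += conf * feat[i] / total
    return out
```

With equal confidences this reduces to a plain mean; skewing the confidence toward one neighbor pulls the aggregate toward that neighbor's features.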
Affiliation(s)
- Fei Yu
- Zhejiang Lab, Hangzhou, 311121, China
- Fengying Li
- Harbin University of Science and Technology, Harbin, 150006, China
- Hui Ji
- Jiangsu University, Zhenjiang, 212013, China
- Yuandi Li
- Jiangsu University, Zhenjiang, 212013, China
9.
Zhang X, Guo J. A feature-enhanced knowledge graph neural network for machine learning method recommendation. PeerJ Comput Sci 2024; 10:e2284. PMID: 39314730. PMCID: PMC11419609. DOI: 10.7717/peerj-cs.2284.
Abstract
The large number of machine learning methods with condensed names poses great challenges for researchers selecting a suitable approach for a target dataset in academic research. Although graph neural networks based on knowledge graphs have been proven helpful in recommending a machine learning method for a given dataset, the issues of inadequate entity representation and over-smoothing of embeddings still need to be addressed. This article proposes a recommendation framework that integrates a feature-enhanced graph neural network and an anti-smoothing aggregation network. In the proposed framework, in addition to utilizing the textual descriptions of the target entities, each node is enhanced with its neighborhood information before participating in the higher-order propagation process. In addition, an anti-smoothing aggregation network is designed to reduce the influence of central nodes in each information aggregation step via an exponential decay function. Extensive experiments on a public dataset demonstrate that the proposed approach exhibits substantial advantages over strong baselines in recommendation tasks.
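The anti-smoothing idea of damping high-degree (central) nodes can be illustrated with a toy aggregation step. This is a sketch under assumptions: the decay rate `lam` and the plain mean-style neighbor aggregation are mine, not the paper's exact formulation.

```python
import math

def aggregate(node_feat, neighbor_feats, degree, lam=0.1):
    # No neighbors: the node keeps its own features.
    if not neighbor_feats:
        return node_feat
    # Average the neighbor feature vectors...
    mean = [sum(col) / len(neighbor_feats) for col in zip(*neighbor_feats)]
    # ...then damp the neighborhood contribution exponentially in the
    # node's degree, so hub nodes dominate each aggregation step less.
    weight = math.exp(-lam * degree)
    return [x + weight * m for x, m in zip(node_feat, mean)]
```

With `lam=0` this degrades to ordinary sum-of-mean aggregation; larger `lam` shrinks the contribution of highly connected nodes, which is the intuition behind countering over-smoothing.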
Affiliation(s)
- Xin Zhang
- School of Artificial Intelligence and Big Data, Hefei University, Hefei, China
- Junjie Guo
- School of Artificial Intelligence and Big Data, Hefei University, Hefei, China
10.
Chen Y, Wu C, Zhang Q, Wu D. Review of visual analytics methods for food safety risks. NPJ Sci Food 2023; 7:49. PMID: 37699926. PMCID: PMC10497676. DOI: 10.1038/s41538-023-00226-x.
Abstract
With the availability of big data for food safety, more and more advanced data analysis methods are being applied to risk analysis and prewarning (RAPW). Visual analytics, which has emerged in recent years, integrates human and machine intelligence into the data analysis process in a visually interactive manner, helping researchers gain insights into large-scale data and providing new solutions for RAPW. This review presents the developments in visual analytics for food safety RAPW over the past decade. First, the data sources, data characteristics, and analysis tasks in the food safety field are summarized. Then, data analysis methods are reviewed for four types of analysis tasks: association analysis, risk assessment, risk prediction, and fraud identification. After that, visualization and interaction techniques are reviewed for four types of characteristic data: multidimensional, hierarchical, associative, and spatial-temporal. Finally, opportunities and challenges in this area are discussed, such as the visual analysis of multimodal food safety data and the application of artificial intelligence techniques in the visual analysis pipeline.
Affiliation(s)
- Yi Chen
- Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, 100048, China
- Caixia Wu
- Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, 100048, China
- Qinghui Zhang
- Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing, 100048, China
- Di Wu
- National Measurement Laboratory: Centre of Excellence in Agriculture and Food Integrity, Institute for Global Food Security, School of Biological Sciences, Queen's University Belfast, Belfast, Northern Ireland, UK
11.
Xu T, Ma Y, Pan T, Chen Y, Liu Y, Zhu F, Zhou Z, Chen Q. Visual Analytics of Multidimensional Oral Health Surveys: Data Mining Study. JMIR Med Inform 2023; 11:e46275. PMID: 37526971. PMCID: PMC10427931. DOI: 10.2196/46275.
Abstract
BACKGROUND: Oral health surveys largely facilitate the prevention and treatment of oral diseases as well as the awareness of population health status. As oral health is surveyed from a variety of perspectives, gaining insights from multidimensional oral health surveys is a difficult and complicated task.
OBJECTIVE: We aimed to develop a visualization framework for the visual analytics and deep mining of multidimensional oral health surveys.
METHODS: First, diseases and groups were embedded into data portraits based on their multidimensional attributes. Subsequently, group classification and correlation pattern extraction were conducted to explore the correlation features among diseases, behaviors, symptoms, and cognitions. On the basis of the feature mining of diseases, groups, behaviors, and their attributes, a knowledge graph was constructed to reveal semantic information, integrate the graph query function, and describe the features of interest to users.
RESULTS: A visualization framework was implemented for the exploration of multidimensional oral health surveys. A series of user-friendly interactions were integrated to propose a visual analysis system that helps users uncover the patterns of oral health conditions.
CONCLUSIONS: This paper provides a visualization framework with a set of meaningful user interactions that enable users to intuitively understand the oral health situation and conduct in-depth data exploration and analysis. Case studies based on real-world datasets demonstrate the effectiveness of our system in the exploration of oral diseases.
Affiliation(s)
- Ting Xu
- Department of Stomatology, First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yuming Ma
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Tianya Pan
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Yifei Chen
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Yuhua Liu
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Fudong Zhu
- The Affiliated Stomatology Hospital, Zhejiang University, Hangzhou, China
- Zhiguang Zhou
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Qianming Chen
- The Affiliated Stomatology Hospital, Zhejiang University, Hangzhou, China
12.
Li J, Zhou CQ. Incorporation of Human Knowledge into Data Embeddings to Improve Pattern Significance and Interpretability. IEEE Transactions on Visualization and Computer Graphics 2023; 29:723-733. PMID: 36155441. DOI: 10.1109/tvcg.2022.3209382.
Abstract
Embedding is a common technique for analyzing multi-dimensional data. However, the embedding projection cannot always form significant and interpretable visual structures that reflect underlying data patterns. We propose an approach that incorporates human knowledge into data embeddings to improve pattern significance and interpretability. The core idea is (1) externalizing tacit human knowledge as explicit sample labels and (2) adding a classification loss to the embedding network to encode samples' classes. The approach pulls samples of the same class with similar data features closer together in the projection, leading to more compact (significant) and class-consistent (interpretable) visual structures. We present an embedding network with a customized classification loss to implement the idea and integrate the network into a visualization system, forming a workflow that supports flexible class creation and pattern exploration. Patterns found on open datasets in case studies, subjects' performance in a user study, and quantitative experiment results illustrate the general usability and effectiveness of the approach.
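The idea of adding a classification loss on top of an embedding objective lends itself to a compact sketch. This is a hypothetical illustration, not the authors' implementation: the squared-error embedding term, the toy 2-D vectors, and the weighting factor `alpha` are all assumptions.

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class.
    return -math.log(probs[label])

def embedding_loss(z, z_target):
    # Squared-error term on the 2-D projection coordinates.
    return sum((a - b) ** 2 for a, b in zip(z, z_target))

def total_loss(z, z_target, class_probs, label, alpha=0.5):
    # Combined objective: projection quality plus class consistency,
    # so same-class samples get pulled together in the projection.
    return embedding_loss(z, z_target) + alpha * cross_entropy(class_probs, label)

loss = total_loss([0.1, 0.2], [0.0, 0.0], [0.7, 0.3], label=0)
```

Raising `alpha` trades projection fidelity for tighter, more class-consistent clusters, which mirrors the compactness/interpretability trade-off the abstract describes.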
13.
Deng D, Wu A, Qu H, Wu Y. DashBot: Insight-Driven Dashboard Generation Based on Deep Reinforcement Learning. IEEE Transactions on Visualization and Computer Graphics 2023; 29:690-700. PMID: 36179003. DOI: 10.1109/tvcg.2022.3209468.
Abstract
Analytical dashboards are popular in business intelligence for facilitating insight discovery with multiple charts. However, creating an effective dashboard is highly demanding: it requires users to have an adequate data analysis background and to be familiar with professional tools such as Power BI. To create a dashboard, users have to configure charts by selecting data columns and exploring different chart combinations to optimize the communication of insights, a trial-and-error process. Recent research has started to use deep learning methods for dashboard generation to lower the burden of visualization creation, but such efforts are greatly hindered by the lack of large-scale, high-quality dashboard datasets. In this work, we propose using deep reinforcement learning to generate analytical dashboards, leveraging well-established visualization knowledge and the estimation capacity of reinforcement learning. Specifically, we use visualization knowledge to construct a training environment and rewards, and we design an agent network that explores and imitates human exploration behavior. The usefulness of the deep reinforcement learning model is demonstrated through ablation studies and user studies. In conclusion, our work opens up new opportunities to develop effective ML-based visualization recommenders without requiring training datasets beforehand.
14.
Huang Y, Yu S, Chu J, Su Z, Zhu Y, Wang H, Wang M, Fan H. Design knowledge graph-aided conceptual product design approach based on joint entity and relation extraction. Journal of Intelligent & Fuzzy Systems 2022. DOI: 10.3233/jifs-223100.
Abstract
Design knowledge is critical for creating ideas in the conceptual design stage of product development. Fragmentary design data and massive multidisciplinary knowledge call for a novel knowledge acquisition approach for conceptual product design. This study proposes a Design Knowledge Graph-aided (DKG-aided) conceptual product design approach for knowledge acquisition and design process improvement. The DKG framework uses a deep-learning algorithm to discover design-related knowledge from massive fragmentary data and constructs a knowledge graph for conceptual product design. A joint entity and relation extraction model is proposed to automatically extract design knowledge from massive unstructured data. The feasibility and high accuracy of the proposed design knowledge extraction model are demonstrated through experimental comparisons, and the DKG is validated in a case study of conceptual product design inspired by massive real data on porcelain.
Affiliation(s)
- Yuexin Huang
- Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an, China
- School of Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands
- Suihuai Yu
- Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an, China
- Jianjie Chu
- Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an, China
- Zhaojing Su
- Department of Industrial Design, College of Arts, Shandong University of Science and Technology, Tsingtao, China
- Yaokang Zhu
- School of Computer Science and Technology, East China Normal University, Dongchuan Rd., Shanghai, China
- Hanyu Wang
- Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an, China
- Mengcheng Wang
- Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an, China
- Hao Fan
- Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an, China
15.
Semantic Data Visualisation for Biomedical Database Catalogues. Healthcare (Basel) 2022; 10(11):2287. DOI: 10.3390/healthcare10112287.
Abstract
Biomedical databases often have restricted access policies and governance rules. Thus, an adequate description of their content is essential for researchers who wish to use them for medical research. One strategy for publishing information without disclosing patient-level data is through database fingerprinting and aggregate characterisations. However, this information is still presented in a format that makes it challenging to search, analyse, and decide on the best databases for a domain of study. Several strategies allow one to visualise and compare the characteristics of multiple biomedical databases. Our study focuses on a European platform for sharing and disseminating biomedical data. We use semantic data visualisation techniques to assist in comparing descriptive metadata from several databases. The great advantage lies in streamlining the database selection process while ensuring that sensitive details are not shared. To address this goal, we consider two levels of data visualisation: one characterising a single database and the other involving multiple databases in network-level visualisations. This study reveals the impact of the proposed visualisations and some open challenges in representing semantically annotated biomedical datasets, and identifies future directions in this scope.
16.
Deagen ME, McCusker JP, Fateye T, Stouffer S, Brinson LC, McGuinness DL, Schadler LS. FAIR and Interactive Data Graphics from a Scientific Knowledge Graph. Sci Data 2022; 9:239. PMID: 35624233. PMCID: PMC9142568. DOI: 10.1038/s41597-022-01352-z.
Abstract
Graph databases capture richly linked domain knowledge by integrating heterogeneous data and metadata into a unified representation. Here, we present the use of bespoke, interactive data graphics (bar charts, scatter plots, etc.) for visual exploration of a knowledge graph. By modeling a chart as a set of metadata that describes semantic context (SPARQL query) separately from visual context (Vega-Lite specification), we leverage the high-level, declarative nature of the SPARQL and Vega-Lite grammars to concisely specify web-based, interactive data graphics synchronized to a knowledge graph. Resources with dereferenceable URIs (uniform resource identifiers) can employ the hyperlink encoding channel or image marks in Vega-Lite to amplify the information content of a given data graphic, and published charts populate a browsable gallery of the database. We discuss design considerations that arise in relation to portability, persistence, and performance. Altogether, this pairing of SPARQL and Vega-Lite, demonstrated here in the domain of polymer nanocomposite materials science, offers an extensible approach to FAIR (findable, accessible, interoperable, reusable) scientific data visualization within a knowledge graph framework.
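The chart-as-metadata idea (a SPARQL query for the semantic context plus a Vega-Lite specification for the visual context) can be sketched as a small record. The query, field names, and URIs below are invented placeholders, not drawn from the authors' knowledge graph:

```python
import json

# A chart modeled as two declarative pieces: semantic context (SPARQL)
# and visual context (Vega-Lite). All identifiers are placeholders.
chart = {
    "query": """
        SELECT ?sample ?modulus ?uri WHERE {
            ?sample <http://example.org/hasModulus> ?modulus ;
                    <http://example.org/page> ?uri .
        }
    """,
    "spec": {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "mark": "point",
        "encoding": {
            "x": {"field": "sample", "type": "nominal"},
            "y": {"field": "modulus", "type": "quantitative"},
            # Vega-Lite's hyperlink encoding channel: clicking a mark
            # opens the dereferenceable URI of the underlying resource.
            "href": {"field": "uri", "type": "nominal"},
        },
    },
}

serialized = json.dumps(chart["spec"])
```

Keeping the two pieces separate is what makes the chart portable: the SPARQL result set can be re-fetched from the live graph while the Vega-Lite spec stays unchanged.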
Affiliation(s)
- Michael E Deagen
- Department of Mechanical Engineering, University of Vermont, Burlington, VT, USA
- Jamie P McCusker
- Tetherless World Constellation, Rensselaer Polytechnic Institute, Troy, NY, USA
- Tolulomo Fateye
- Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC, USA
- Samuel Stouffer
- Tetherless World Constellation, Rensselaer Polytechnic Institute, Troy, NY, USA
- L Cate Brinson
- Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC, USA
- Linda S Schadler
- Department of Mechanical Engineering, University of Vermont, Burlington, VT, USA