1
Qin Y, Wu X, Yu T, Jiang S. Enhancing student-centered walking environments on university campuses through street view imagery and machine learning. PLoS One 2025;20:e0321028. PMID: 40203019; PMCID: PMC11981197; DOI: 10.1371/journal.pone.0321028.
Abstract
Campus walking environments significantly influence college students' daily lives and shape their subjective perceptions. However, previous studies have been constrained by limited sample sizes and inefficient, time-consuming methodologies. To address these limitations, we developed a deep learning framework to evaluate campus walking perceptions across four universities in China's Yangtze River Delta region. Utilizing 15,596 Baidu Street View Images (BSVIs) and perceptual ratings from 100 volunteers across four dimensions (aesthetics, security, depression, and vitality), we employed four machine learning models to predict perceptual scores. Our results demonstrate that the Random Forest (RF) model outperformed the others in predicting aesthetics, security, and vitality, while linear regression was most effective for depression. Spatial analysis revealed that perceptions of aesthetics, security, and vitality were concentrated in landmark areas and regions with high pedestrian flow. Multiple linear regression analysis indicated that buildings exhibited a stronger correlation with depression (β = 0.112) than with other perceptual aspects. Moreover, vegetation (β = 0.032) and meadow (β = 0.176) elements significantly enhanced aesthetics. This study offers actionable insights for optimizing campus walking environments from a student-centered perspective, emphasizing the importance of spatial design and visual elements in enhancing students' perceptual experiences.
Affiliation(s)
- Yi Qin
- Department of Art and Craft, Xi’an Academy of Fine Arts, Xi’an, China
- Xue Wu
- Department of Arts, School of Humanities and Social Sciences, Xi’an Jiaotong University, Xi’an, China
- Tengfei Yu
- School of Arts, Chongqing University, Chongqing, China
- Shuai Jiang
- College of Architecture and Landscape, Peking University, Beijing, China
2
Li W, Sun R, He H, Yan M, Chen L. Perceptible landscape patterns reveal invisible socioeconomic profiles of cities. Sci Bull (Beijing) 2024;69:3291-3302. PMID: 38969538; DOI: 10.1016/j.scib.2024.06.022.
Abstract
Urban landscape is directly perceived by residents and is a significant symbol of urbanization. A comprehensive assessment of urban landscapes is crucial for guiding the development of inclusive, resilient, and sustainable cities and human settlements. Previous studies have primarily analyzed two-dimensional landscape indicators derived from satellite remote sensing, potentially overlooking the valuable insights provided by the three-dimensional configuration of landscapes. This limitation arises from the high cost of acquiring large-area three-dimensional data and the lack of effective assessment indicators. Here, we propose four urban landscape indicators in three dimensions (UL3D): greenness, grayness, openness, and crowding. We construct the UL3D using 4.03 million street view images from 303 major cities in China, employing a deep learning approach. We combine urban background and two-dimensional urban landscape indicators with UL3D to predict the socioeconomic profiles of cities. The results show that UL3D indicators differ from two-dimensional landscape indicators, with a low average correlation coefficient of 0.31 between them. Urban landscapes reached a change point in 2018-2019 due to new urbanization initiatives, with the growth of grayness and crowding slowing while openness increased. The incorporation of UL3D indicators significantly enhances the explanatory power of the regression model for predicting socioeconomic profiles. Specifically, GDP per capita, urban population rate, built-up area per capita, and hospital count correspond to improvements of 25.0%, 19.8%, 35.5%, and 19.2%, respectively. These findings indicate that UL3D indicators have the potential to reflect the socioeconomic profiles of cities.
Affiliation(s)
- Wenning Li
- State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China
- Ranhao Sun
- State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Hongbin He
- State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Ming Yan
- State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Liding Chen
- State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China; University of Chinese Academy of Sciences, Beijing 100049, China
3
Yang B, Tang H, Wang X, Liang X, Qin H, Hu H. Data-Driven Insights Into Urban Intersections: Visual Analytics of High-Value Scenes. IEEE Computer Graphics and Applications 2024;44:30-42. PMID: 38648158; DOI: 10.1109/mcg.2024.3392417.
Abstract
In this article, we propose TraVis, an interactive system that allows users to explore and analyze complex traffic trajectory data at urban intersections. Trajectory data contain a large amount of spatio-temporal information, and while previous studies have mainly focused on the macroscopic aspects of traffic flow, TraVis employs visualization methods to investigate and analyze microscopic traffic events (i.e., high-value scenes in trajectory data). TraVis contains a novel view design and provides multiple interaction modalities to offer users the most intuitive insights into high-value scenes. With this system, users can gain a better understanding of urban intersection traffic information, identify different types of high-value scenes, explore the reasons behind their occurrence, and gain deeper insights into urban intersection traffic. Through two case studies, we illustrate how to use the system and validate its effectiveness.
4
Amiruzzaman M, Zhao Y, Amiruzzaman S, Karpinski AC, Wu TH. An AI-based framework for studying visual diversity of urban neighborhoods and its relationship with socio-demographic variables. Journal of Computational Social Science 2022;6:315-337. PMID: 36593882; PMCID: PMC9795947; DOI: 10.1007/s42001-022-00197-1.
Abstract
This study presents a framework for quantitatively studying the geographical visual diversity of urban neighborhoods from a large collection of street-view images, using an Artificial Intelligence (AI)-based image segmentation technique. A variety of diversity indices are computed from the extracted visual semantics and utilized to discover the relationships between urban visual appearance and socio-demographic variables. This study also validates the reliability of the method with human evaluators. The methodology and results obtained from this study can potentially be used to study urban features, locate houses, establish services, and better operate municipalities.
Affiliation(s)
- Md Amiruzzaman
- Department of Computer Science, West Chester University, West Chester, PA USA
- Ye Zhao
- Department of Computer Science, Kent State University, Kent, OH USA
- Stefanie Amiruzzaman
- Department of Languages and Cultures, West Chester University, West Chester, PA USA
- Aryn C. Karpinski
- Research, Measurement & Statistics, Kent State University, Kent, OH USA
- Tsung Heng Wu
- Department of Computer Science, Kent State University, Kent, OH USA
5
Deng Z, Weng D, Liu S, Tian Y, Xu M, Wu Y. A survey of urban visual analytics: Advances and future directions. Computational Visual Media 2022;9:3-39. PMID: 36277276; PMCID: PMC9579670; DOI: 10.1007/s41095-022-0275-7.
Abstract
Developing effective visual analytics systems demands care in the characterization of domain problems and the integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualizations, analyze 7 types of computational methods, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.
Affiliation(s)
- Zikun Deng
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Di Weng
- Microsoft Research Asia, Beijing, 100080 China
- Shuhan Liu
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Yuan Tian
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Mingliang Xu
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou, 450001 China
- Yingcai Wu
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
6
de Mesquita RG, Ren TI, Mello CAB, Silva MLPC. Street pavement classification based on navigation through street view imagery. AI & Society 2022. DOI: 10.1007/s00146-022-01520-0.
7
Deng Z, Weng D, Liang Y, Bao J, Zheng Y, Schreck T, Xu M, Wu Y. Visual Cascade Analytics of Large-Scale Spatiotemporal Data. IEEE Transactions on Visualization and Computer Graphics 2022;28:2486-2499. PMID: 33822726; DOI: 10.1109/tvcg.2021.3071387.
Abstract
Many spatiotemporal events can be viewed as contagions. These events implicitly propagate across space and time by following cascading patterns, expanding their influence, and generating event cascades that involve multiple locations. Analyzing such cascading processes has valuable implications for various urban applications, such as traffic planning and pollution diagnostics. Motivated by the limited capability of the existing approaches in mining and interpreting cascading patterns, we propose a visual analytics system called VisCas. VisCas combines an inference model with interactive visualizations and empowers analysts to infer and interpret the latent cascading patterns in the spatiotemporal context. To develop VisCas, we address three major challenges: 1) generalized pattern inference; 2) implicit influence visualization; and 3) multifaceted cascade analysis. For the first challenge, we adapt the state-of-the-art cascading network inference technique to general urban scenarios, where cascading patterns can be reliably inferred from large-scale spatiotemporal data. For the second and third challenges, we assemble a set of effective visualizations to support location navigation, influence inspection, and cascading exploration, and facilitate in-depth cascade analysis. We design a novel influence view based on a three-fold optimization strategy for analyzing the implicit influences of the inferred patterns. We demonstrate the capability and effectiveness of VisCas with two case studies conducted on real-world traffic congestion and air pollution datasets with domain experts.
8
Jamonnak S, Bhati D, Amiruzzaman M, Zhao Y, Ye X, Curtis A. VisualCommunity: a platform for archiving and studying communities. Journal of Computational Social Science 2022;5:1257-1279. PMID: 35602668; PMCID: PMC9109455; DOI: 10.1007/s42001-022-00170-y.
Abstract
VisualCommunity is a platform designed to support community- or neighborhood-scale research. The platform integrates mobile, AI, and visualization techniques, along with tools that help domain researchers, practitioners, and students collect and work with spatialized video and geo-narratives. These data, which provide granular spatialized imagery and associated context gained through expert commentary, have previously proved valuable in understanding various community-scale challenges. This paper further enhances this work with the AI-based image processing and speech transcription tools available in VisualCommunity, allowing easy exploration of the acquired semantic and visual information about the area under investigation. In this paper we describe the specific advances through use-case examples, including COVID-19-related scenarios.
Affiliation(s)
- Deepshikha Bhati
- Department of Computer Science, Kent State University, Kent, OH USA
- Md Amiruzzaman
- Department of Computer Science, West Chester University, West Chester, PA USA
- Ye Zhao
- Department of Computer Science, Kent State University, Kent, OH USA
- Xinyue Ye
- Department of Landscape Architecture and Urban Planning, Texas A&M University, College Station, TX USA
- Andrew Curtis
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH USA
9
Jamonnak S, Zhao Y, Huang X, Amiruzzaman M. Geo-Context Aware Study of Vision-Based Autonomous Driving Models and Spatial Video Data. IEEE Transactions on Visualization and Computer Graphics 2022;28:1019-1029. PMID: 34596546; DOI: 10.1109/tvcg.2021.3114853.
Abstract
Vision-based deep learning (DL) methods have made great progress in learning autonomous driving models from large-scale crowd-sourced video datasets. They are trained to predict instantaneous driving behaviors from video data captured by on-vehicle cameras. In this paper, we develop a geo-context aware visualization system for the study of Autonomous Driving Model (ADM) predictions together with large-scale ADM video data. The visual study is seamlessly integrated with the geographical environment by combining DL model performance with geospatial visualization techniques. Model performance measures can be studied together with a set of geospatial attributes over map views. Users can also discover and compare prediction behaviors of multiple DL models in both city-wide and street-level analysis, together with road images and video contents. Therefore, the system provides a new visual exploration platform for DL model designers in autonomous driving. Use cases and domain expert evaluation show the utility and effectiveness of the visualization system.
10
Shao L, Chu Z, Chen X, Lin Y, Zeng W. Modeling layout design for multiple-view visualization via Bayesian inference. J Vis (Tokyo) 2021. DOI: 10.1007/s12650-021-00781-z.
11
Near Real-Time Semantic View Analysis of 3D City Models in Web Browser. ISPRS International Journal of Geo-Information 2021. DOI: 10.3390/ijgi10030138.
Abstract
3D city models and their browser-based applications have become an increasingly applied tool in cities. One of their applications is the analysis of views and visibility, applicable to property valuation and the evaluation of urban green infrastructure. We present a near real-time semantic view analysis relying on a 3D city model, implemented in a web browser. The analysis is tested in two alternative use cases: property valuation and evaluation of the urban green infrastructure. The results describe the elements visible from a given location and can also be applied to object-type-specific analysis, such as green view index estimation, with the main benefit being the freedom to choose the point of view afforded by the 3D model. Several promising development directions can be identified based on the current implementation and experimental results, including the integration of the semantic view analysis with virtual reality immersive visualization or 3D city model application development platforms.
12
Spatial information and the legibility of urban form: Big data in urban morphology. International Journal of Information Management 2021. DOI: 10.1016/j.ijinfomgt.2019.09.009.
13
Zeng W, Lin C, Lin J, Jiang J, Xia J, Turkay C, Chen W. Revisiting the Modifiable Areal Unit Problem in Deep Traffic Prediction with Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2021;27:839-848. PMID: 33074818; DOI: 10.1109/tvcg.2020.3030410.
Abstract
Deep learning methods are being increasingly used for urban traffic prediction, where spatiotemporal traffic data is aggregated into sequentially organized matrices that are then fed into convolution-based residual neural networks. However, the widely known modifiable areal unit problem within such aggregation processes can lead to perturbations in the network inputs. This issue can significantly destabilize the feature embeddings and the predictions, rendering deep networks much less useful for the experts. This paper approaches this challenge by leveraging unit visualization techniques that enable the investigation of many-to-many relationships between dynamically varied multi-scalar aggregations of urban traffic data and neural network predictions. Through regular exchanges with a domain expert, we design and develop a visual analytics solution that integrates 1) a Bivariate Map equipped with an advanced bivariate colormap to simultaneously depict input traffic and prediction errors across space, 2) a Moran's I Scatterplot that provides local indicators of spatial association analysis, and 3) a Multi-scale Attribution View that arranges non-linear dot plots in a tree layout to promote model analysis and comparison across scales. We evaluate our approach through a series of case studies involving a real-world dataset of Shenzhen taxi trips, and through interviews with domain experts. We observe that geographical scale variations have an important impact on prediction performance, and that interactive visual exploration of dynamically varying inputs and outputs benefits experts in the development of deep traffic prediction models.
14
Amiruzzaman M, Curtis A, Zhao Y, Jamonnak S, Ye X. Classifying crime places by neighborhood visual appearance and police geonarratives: a machine learning approach. Journal of Computational Social Science 2021;4:813-837. PMID: 33718652; PMCID: PMC7938887; DOI: 10.1007/s42001-021-00107-x.
Abstract
The complex interrelationship between the built environment and social problems is often described but frequently lacks the data and analytical framework to explore the potential of such a relationship in different applications. We address this gap using a machine learning (ML) approach to study whether street-level built environment visuals can be used to classify locations with high-crime and lower-crime activities. For training the ML model, spatialized expert narratives are used to label different locations. Semantic categories (e.g., road, sky, greenery, etc.) are extracted from Google Street View (GSV) images of those locations through a deep learning image segmentation algorithm. From these, local visual representatives are generated and used to train the classification model. The model is applied to two cities in the U.S. to predict the locations as being linked to high crime. Results show our model can predict high- and lower-crime areas with high accuracies (above 98% and 95% in the first and second test cities, respectively).
Affiliation(s)
- Ye Zhao
- Kent State University, Kent, USA
- Xinyue Ye
- Texas A & M University, College Station, USA
15
Wu M, Zeng W, Fu CW. FloorLevel-Net: Recognizing Floor-Level Lines With Height-Attention-Guided Multi-Task Learning. IEEE Transactions on Image Processing 2021;30:6686-6699. PMID: 34310282; DOI: 10.1109/tip.2021.3096090.
Abstract
The ability to recognize the position and order of the floor-level lines that divide adjacent building floors can benefit many applications, for example, urban augmented reality (AR). This work tackles the problem of locating floor-level lines in street-view images, using a supervised deep learning approach. Unfortunately, very little data is available for training such a network: current street-view datasets contain either semantic annotations that lack geometric attributes, or rectified facades without perspective priors. To address this issue, we first compile a new dataset and develop a new data augmentation scheme to synthesize training samples by harnessing (i) the rich semantics of existing rectified facades and (ii) perspective priors of buildings in diverse street views. Next, we design FloorLevel-Net, a multi-task learning network that associates explicit features of building facades and implicit floor-level lines, along with a height-attention mechanism to help enforce a vertical ordering of floor-level lines. The generated segmentations are then passed to a second-stage geometry post-processing to exploit self-constrained geometric priors for plausible and consistent reconstruction of floor-level lines. Quantitative and qualitative evaluations conducted on assorted facades in existing datasets and street views from Google demonstrate the effectiveness of our approach. Also, we present context-aware image overlay results and show the potential of our approach in enriching AR-related applications. Project website: https://wumengyangok.github.io/Project/FloorLevelNet.
16
Quantifying the Urban Visual Perception of Chinese Traditional-Style Building with Street View Images. Applied Sciences (Basel) 2020. DOI: 10.3390/app10175963.
Abstract
As a symbol of Chinese culture, Chinese traditional-style architecture defines the unique characteristics of Chinese cities. The visual qualities and spatial distribution of architecture represent the image of a city, which affects the psychological states of residents and can induce positive or negative social outcomes. Hence, it is important to study the visual perception of Chinese traditional-style buildings in China. Previous works have been restricted by the lack of data sources and techniques, and were neither quantitative nor comprehensive. In this paper, we propose a deep learning model for automatically predicting the presence of Chinese traditional-style buildings and develop two view indicators to quantify pedestrians’ visual perceptions of buildings. Using this model, Chinese traditional-style buildings were automatically segmented in streetscape images within the Fifth Ring Road of Beijing, and the perception of Chinese traditional-style buildings was then quantified with the two view indicators. The model can also help to automatically predict the perception of Chinese traditional-style buildings for new urban regions in China, and, more importantly, the two view indicators provide a new quantitative method for measuring urban visual perception at the street level, which is of great significance for quantitative research on tourism routes and urban planning.
17
Zeng W, Dong A, Chen X, Cheng ZL. VIStory: interactive storyboard for exploring visual information in scientific publications. J Vis (Tokyo) 2020;24:69-84. PMID: 32837222; PMCID: PMC7429144; DOI: 10.1007/s12650-020-00688-1.
Abstract
Many visual analytics systems have been developed for examining scientific publications comprising rich data such as authors and citations. These studies provide unprecedented insights for a variety of applications, e.g., literature review and collaboration analysis. However, visual information (e.g., figures) that is widely employed for storytelling and method description is often neglected. We present VIStory, an interactive storyboard for exploring visual information in scientific publications. We harvest a new dataset of a large corpus of figures, using an automatic figure extraction method. Each figure contains various attributes such as dominant color and width/height ratio, together with faceted metadata of the publication including venues, authors, and keywords. To depict this information, we develop an intuitive interface consisting of three components: (1) Faceted View enables efficient query by publication metadata, benefiting from a nested table structure; (2) Storyboard View arranges paper rings, a well-designed glyph for depicting figure attributes, in a themeriver layout to reveal temporal trends; and (3) Endgame View presents a highlighted figure together with the publication metadata. We illustrate the applicability of VIStory with case studies on two datasets, i.e., 10 years of IEEE VIS publications, and publications by a research team at the CVPR, ICCV, and ECCV conferences. Quantitative and qualitative results from a formal user study demonstrate the efficiency of VIStory in exploring visual information in scientific publications.
Electronic supplementary material The online version of this article (10.1007/s12650-020-00688-1) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Wei Zeng
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ao Dong
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Xi Chen
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Zhang-Lin Cheng
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
18
Quantitative Landscape Assessment Using LiDAR and Rendered 360° Panoramic Images. Remote Sensing 2020. DOI: 10.3390/rs12030386.
Abstract
The study presents a new method for quantitative landscape assessment. The method uses LiDAR data and combines the potential of GIS (ArcGIS) and 3D graphics software (Blender). The developed method allows one to create Classified Digital Surface Models (CDSM), which are then used to create 360° panoramic images from the point of view of the observer. In order to quantify the landscape, 360° panoramic images were transformed to the Interrupted Sinusoidal Projection using G.Projector software. A quantitative landscape assessment is carried out automatically with the following landscape classes: ground, low, medium, and high vegetation, buildings, water, and sky according to the LiDAR 1.2 standard. The results of the analysis are presented quantitatively—the percentage distribution of landscape classes in the 360° field of view. In order to fully describe the landscape around the observer, graphs of little planets have been proposed to interpret the obtained results. The usefulness of the developed methodology, together with examples of its application and the way of presenting the results, is described. The proposed Quantitative Landscape Assessment method (QLA360) allows quantitative landscape assessment to be performed in the 360° field of view without the need to carry out field surveys. The QLA360 uses LiDAR American Society of Photogrammetry and Remote Sensing (ASPRS) classification standards, which allows one to avoid differences resulting from the use of different algorithms for classifying images in semantic segmentation. The most important advantages of the method are as follows: observer-independent, 360° field of view which simulates human perspective, automatic operation, scalability, and easy presentation and interpretation of results.
19
MAP-Vis: A Distributed Spatio-Temporal Big Data Visualization Framework Based on a Multi-Dimensional Aggregation Pyramid Model. Applied Sciences (Basel) 2020. DOI: 10.3390/app10020598.
Abstract
During the exploration and visualization of big spatio-temporal data, massive volume poses a number of challenges to the achievement of interactive visualization, including large memory consumption, high rendering delay, and poor visual effects. Research has shown that the development of distributed computing frameworks provides a feasible solution for big spatio-temporal data management and visualization. Accordingly, to address these challenges, this paper adopts a proprietary pre-processing visualization scheme and designs and implements a highly scalable distributed visual analysis framework, especially targeted at massive point-type datasets. Firstly, we propose a generic multi-dimensional aggregation pyramid (MAP) model based on two well-known graphics concepts, namely the Spatio-temporal Cube and 2D Tile Pyramid. The proposed MAP model can support the simultaneous hierarchical aggregation of time, space, and attributes, and also later transformation of the derived aggregates into discrete key-value pairs for scalable storage and efficient retrieval. Using the generated MAP datasets, we develop an open-source distributed visualization framework (MAP-Vis). In MAP-Vis, a high-performance Spark cluster is used as a parallel preprocessing platform, while distributed HBase is used as the massive storage for the generated MAP data. The client of MAP-Vis provides a variety of correlated visualization views, including heat map, time series, and attribute histogram. Four open datasets, with record numbers ranging from the millions to the tens of billions, are chosen for system demonstration and performance evaluation. The experimental results demonstrate that MAP-Vis can achieve millisecond-level query response and support efficient interactive visualization under different queries on the space, time, and attribute dimensions.
20
Huang Z, Zhao Y, Chen W, Gao S, Yu K, Xu W, Tang M, Zhu M, Xu M. A Natural-language-based Visual Query Approach of Uncertain Human Trajectories. IEEE Transactions on Visualization and Computer Graphics 2020; 26:1256-1266. [PMID: 31443013 DOI: 10.1109/tvcg.2019.2934671]
Abstract
Visual querying is essential for interactively exploring massive trajectory data. However, data uncertainty poses profound challenges to fulfilling advanced analytics requirements. On the one hand, much of the underlying data does not contain accurate geographic coordinates; e.g., the positions of a mobile phone refer only to the regions (i.e., mobile cell stations) in which it resides rather than to accurate GPS coordinates. On the other hand, domain experts and general users prefer a natural way, such as a natural language sentence, to access and analyze massive movement data. In this paper, we propose a visual analytics approach that can extract spatio-temporal constraints from a textual sentence and support an effective query method over uncertain mobile trajectory data. It is built on encoding massive, spatially uncertain trajectories with the semantic information of the POIs and regions they cover, and then storing the resulting trajectory documents in a text database with an effective indexing scheme. The visual interface facilitates query condition specification, situation-aware visualization, and semantic exploration of large trajectory data. Usage scenarios on real-world human mobility datasets demonstrate the effectiveness of our approach.
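The core idea of turning a sentence into a spatio-temporal constraint can be sketched very roughly. The gazetteer, region IDs, and pattern below are toy inventions (the paper's system grounds queries in POI and cell-station semantics, not a hand-written regex); this only shows the shape of the extracted constraint.

```python
import re
from datetime import time

# Hypothetical toy gazetteer mapping place names to region IDs.
GAZETTEER = {"central park": "R17", "airport": "R03"}

def parse_query(sentence):
    """Extract a {region, start, end} constraint from a sentence like
    'trajectories near the airport between 8:00 and 10:30'."""
    s = sentence.lower()
    region = next((rid for name, rid in GAZETTEER.items() if name in s), None)
    m = re.search(r"between (\d{1,2}):(\d{2}) and (\d{1,2}):(\d{2})", s)
    start = end = None
    if m:
        start = time(int(m.group(1)), int(m.group(2)))
        end = time(int(m.group(3)), int(m.group(4)))
    return {"region": region, "start": start, "end": end}

q = parse_query("Show trajectories near the airport between 8:00 and 10:30")
```

The resulting constraint dictionary is what a text-database index can then be queried against, one region/time filter per clause.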
21
Dong R, Zhang Y, Zhao J. How Green Are the Streets Within the Sixth Ring Road of Beijing? An Analysis Based on Tencent Street View Pictures and the Green View Index. International Journal of Environmental Research and Public Health 2018; 15:1367. [PMID: 29966237 PMCID: PMC6068519 DOI: 10.3390/ijerph15071367]
Abstract
Street greenery, an important urban landscape component, is closely related to people’s physical and mental health. This study employs the green view index (GVI) as a quantitative indicator to evaluate visual greenery from a pedestrian’s perspective and uses an image segmentation method to calculate the quantity of visual greenery from Tencent street view pictures. This article aims to quantify street greenery in the area within the sixth ring road in Beijing, analyse the relations between road parameters and the GVI, and compare the visual greenery of different road types. The authors find that (1) the average GVI value in the study area is low, with low-value clusters inside the third ring road and high-value clusters outside; (2) wider minor roads tend to have higher GVI values than motorways, major roads and provincial roads; and (3) longer roads, except expressways, tend to have higher GVI values. This case study demonstrates that the GVI can effectively represent the quantity of visual greenery along roads. The authors’ methods can be employed to compare street-level visual greenery among different areas or road types and to support urban green space planning and management.
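The green view index itself is just the fraction of vegetation pixels in a street view picture. A minimal sketch, in which a crude green-dominance rule stands in for the paper's actual image segmentation method, and the sample pixels are made up:

```python
def green_view_index(pixels):
    """pixels: list of (r, g, b) tuples sampled from a street view picture.
    Returns the fraction classified as vegetation by a heuristic
    green-dominance rule (a stand-in for real image segmentation)."""
    green = sum(1 for r, g, b in pixels if g > r and g > b and g > 60)
    return green / len(pixels)

# Synthetic 2x2 "street view": two green pixels, one gray, one blue.
img = [(30, 120, 40), (20, 150, 60), (100, 100, 100), (10, 20, 200)]
gvi = green_view_index(img)
print(gvi)  # 0.5: half of the pixels count as vegetation
```

Averaging this per-picture ratio over all pictures sampled along a road yields the road-level GVI values the study compares across road types.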
Affiliation(s)
- Rencai Dong - State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China.
- Yonglin Zhang - State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China; College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China.
- Jingzhu Zhao - State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing 100085, China.
22
Impacts of Street-Visible Greenery on Housing Prices: Evidence from a Hedonic Price Model and a Massive Street View Image Dataset in Beijing. ISPRS International Journal of Geo-Information 2018. [DOI: 10.3390/ijgi7030104]
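A hedonic price model of this kind regresses (log) housing price on a street-greenery measure plus property controls, so the greenery coefficient reads as a semi-elasticity. A minimal univariate sketch with made-up numbers (the study's actual specification, controls, and data are not reproduced here):

```python
import math

def ols_slope_intercept(x, y):
    """Univariate OLS: regress y on x, return (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# Made-up sample: street green view share vs. log unit price,
# constructed so each +0.1 of greenery raises price by 10%.
gvi = [0.10, 0.20, 0.30, 0.40]
log_price = [math.log(40000), math.log(44000),
             math.log(48400), math.log(53240)]
intercept, slope = ols_slope_intercept(gvi, log_price)
# slope estimates the semi-elasticity: % price change per unit of greenery
```

On this constructed data the slope recovers log(1.1)/0.1, i.e., roughly a 9.5% log-price increase per 0.1 of green view share.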
23
Zeng W, Ye Y. VitalVizor: A Visual Analytics System for Studying Urban Vitality. IEEE Computer Graphics and Applications 2018; 38:38-53. [PMID: 30273126 DOI: 10.1109/mcg.2018.053491730]
Abstract
Creating lively places with high urban vitality is an ultimate goal for urban planning and design. The VitalVizor visual analytics system employs well-established visualization and interaction techniques to facilitate user exploration of spatial physical entities and non-spatial urban design metrics when studying urban vitality.