Reference Citation Analysis
For: Kroner A, Senden M, Driessens K, Goebel R. Contextual encoder-decoder network for visual saliency prediction. Neural Netw 2020;129:261-270. [PMID: 32563023] [DOI: 10.1016/j.neunet.2020.05.004] [Citation(s) in RCA: 31]
Cited by the following articles:
1. Makram AW, Salem NM, El-Wakad MT, Al-Atabany W. Robust detection and refinement of saliency identification. Sci Rep 2024;14:11076. [PMID: 38744990] [DOI: 10.1038/s41598-024-61105-3] [Citation(s) in RCA: 0]
2. Huo F, Liu Z, Guo J, Xu W, Guo S. UTDNet: A unified triplet decoder network for multimodal salient object detection. Neural Netw 2024;170:521-534. [PMID: 38043372] [DOI: 10.1016/j.neunet.2023.11.051] [Citation(s) in RCA: 0]
3. Lai B, Liu M, Ryan F, Rehg JM. In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation and Beyond. Int J Comput Vis 2023;132:854-871. [PMID: 38371492] [PMCID: PMC10873248] [DOI: 10.1007/s11263-023-01879-7] [Citation(s) in RCA: 1]
4. Bruckert A, Christie M, Le Meur O. Where to look at the movies: Analyzing visual attention to understand movie editing. Behav Res Methods 2023;55:2940-2959. [PMID: 36002630] [DOI: 10.3758/s13428-022-01949-7] [Citation(s) in RCA: 0]
5. Malladi SPK, Mukherjee J, Larabi MC, Chaudhury S. Towards explainable deep visual saliency models. Comput Vis Image Underst 2023:103782. [DOI: 10.1016/j.cviu.2023.103782] [Citation(s) in RCA: 0]
6. Novin S, Fallah A, Rashidi S, Daliri MR. An improved saliency model of visual attention dependent on image content. Front Hum Neurosci 2023;16:862588. [PMID: 36926377] [PMCID: PMC10011177] [DOI: 10.3389/fnhum.2022.862588] [Citation(s) in RCA: 0]
7. Fan S, Shen Z, Jiang M, Koenig BL, Kankanhalli MS, Zhao Q. Emotional Attention: From Eye Tracking to Computational Modeling. IEEE Trans Pattern Anal Mach Intell 2023;45:1682-1699. [PMID: 35446761] [DOI: 10.1109/tpami.2022.3169234] [Citation(s) in RCA: 0]
8. Audio–visual collaborative representation learning for Dynamic Saliency Prediction. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109675] [Citation(s) in RCA: 0]
9. Liu N, Li L, Zhao W, Han J, Shao L. Instance-Level Relative Saliency Ranking With Graph Reasoning. IEEE Trans Pattern Anal Mach Intell 2022;44:8321-8337. [PMID: 34437057] [DOI: 10.1109/tpami.2021.3107872] [Citation(s) in RCA: 1]
10. Multi-task visual discomfort prediction model for stereoscopic images based on multi-view feature representation. Appl Intell 2022. [DOI: 10.1007/s10489-022-04156-1] [Citation(s) in RCA: 0]
11. Zeng L, Li T, Wang X, Chen L, Zeng P, Herrin JS. UNetGE: A U-Net-Based Software at Automatic Grain Extraction for Image Analysis of the Grain Size and Shape Characteristics. Sensors (Basel) 2022;22:5565. [PMID: 35898069] [PMCID: PMC9330053] [DOI: 10.3390/s22155565] [Citation(s) in RCA: 0]
12. Pei J, Zhou T, Tang H, Liu C, Chen C. FGO-Net: Feature and Gaussian Optimization Network for visual saliency prediction. Appl Intell 2022. [DOI: 10.1007/s10489-022-03647-5] [Citation(s) in RCA: 0]
13. TranSalNet: Towards perceptually relevant visual saliency prediction. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.080] [Citation(s) in RCA: 1]
14. DeepRare: Generic Unsupervised Visual Attention Models. Electronics 2022. [DOI: 10.3390/electronics11111696] [Citation(s) in RCA: 0]
15. Hayes TR, Henderson JM. Meaning maps detect the removal of local semantic scene content but deep saliency models do not. Atten Percept Psychophys 2022;84:647-654. [PMID: 35138579] [PMCID: PMC11128357] [DOI: 10.3758/s13414-021-02395-x] [Citation(s) in RCA: 1]
16. Futagami T, Hayasaka N. Improvement in automatic food region extraction based on saliency detection. Int J Food Prop 2022. [DOI: 10.1080/10942912.2022.2055056] [Citation(s) in RCA: 0]
17. Object Categorization Capability of Psychological Potential Field in Perceptual Assessment Using Line-Drawing Images. J Imaging 2022;8:90. [PMID: 35448217] [PMCID: PMC9026922] [DOI: 10.3390/jimaging8040090] [Citation(s) in RCA: 1]
18. Where Is My Mind (Looking at)? A Study of the EEG–Visual Attention Relationship. Informatics 2022. [DOI: 10.3390/informatics9010026] [Citation(s) in RCA: 0]
19. Amunts K, DeFelipe J, Pennartz C, Destexhe A, Migliore M, Ryvlin P, Furber S, Knoll A, Bitsch L, Bjaalie JG, Ioannidis Y, Lippert T, Sanchez-Vives MV, Goebel R, Jirsa V. Linking Brain Structure, Activity, and Cognitive Function through Computation. eNeuro 2022;9:ENEURO.0316-21.2022. [PMID: 35217544] [PMCID: PMC8925650] [DOI: 10.1523/eneuro.0316-21.2022] [Citation(s) in RCA: 13]
20. Pedziwiatr MA, Kümmerer M, Wallis TSA, Bethge M, Teufel C. Semantic object-scene inconsistencies affect eye movements, but not in the way predicted by contextualized meaning maps. J Vis 2022;22:9. [PMID: 35171232] [PMCID: PMC8857618] [DOI: 10.1167/jov.22.2.9] [Citation(s) in RCA: 0]
21. Review of Visual Saliency Prediction: Development Process from Neurobiological Basis to Deep Models. Appl Sci (Basel) 2021. [DOI: 10.3390/app12010309] [Citation(s) in RCA: 4]
22. Luna-Jiménez C, Griol D, Callejas Z, Kleinlein R, Montero JM, Fernández-Martínez F. Multimodal Emotion Recognition on RAVDESS Dataset Using Transfer Learning. Sensors (Basel) 2021;21:7665. [PMID: 34833739] [PMCID: PMC8618559] [DOI: 10.3390/s21227665] [Citation(s) in RCA: 7]
23. Deng X, Zhang Z. Sparsity-control ternary weight networks. Neural Netw 2021;145:221-232. [PMID: 34773898] [DOI: 10.1016/j.neunet.2021.10.018] [Citation(s) in RCA: 0]
24. Hierarchical Domain-Adapted Feature Learning for Video Saliency Prediction. Int J Comput Vis 2021. [DOI: 10.1007/s11263-021-01519-y] [Citation(s) in RCA: 6]
25. Malladi SPK, Mukhopadhyay J, Larabi C, Chaudhury S. Lighter and Faster Cross-Concatenated Multi-Scale Residual Block Based Network for Visual Saliency Prediction. 2021 IEEE International Conference on Image Processing (ICIP). [DOI: 10.1109/icip42928.2021.9506710] [Citation(s) in RCA: 0]
26. Hayes TR, Henderson JM. Deep saliency models learn low-, mid-, and high-level features to predict scene attention. Sci Rep 2021;11:18434. [PMID: 34531484] [PMCID: PMC8445969] [DOI: 10.1038/s41598-021-97879-z] [Citation(s) in RCA: 5]
27. Deep saliency models: The quest for the loss function. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.131] [Citation(s) in RCA: 8]
28. Predicting atypical visual saliency for autism spectrum disorder via scale-adaptive inception module and discriminative region enhancement loss. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.125] [Citation(s) in RCA: 3]
29. Guided Spatial Transformers for Facial Expression Recognition. Appl Sci (Basel) 2021. [DOI: 10.3390/app11167217] [Citation(s) in RCA: 5]
30. Svanera M, Morgan AT, Petro LS, Muckli L. A self-supervised deep neural network for image completion resembles early visual cortex fMRI activity patterns for occluded scenes. J Vis 2021;21:5. [PMID: 34259828] [PMCID: PMC8288063] [DOI: 10.1167/jov.21.7.5] [Citation(s) in RCA: 2]
31. Hierarchical Multimodal Adaptive Fusion (HMAF) Network for Prediction of RGB-D Saliency. Comput Intell Neurosci 2020;2020:8841681. [PMID: 33293945] [PMCID: PMC7700038] [DOI: 10.1155/2020/8841681] [Citation(s) in RCA: 1]