Augmented Reality, Virtual Reality & Semantic 3D Reconstruction
Augmented reality is a key technology that will facilitate a major paradigm shift in the way users interact with data and has only recently been recognized as a viable solution for many critical needs. In practical terms, this innovation can be used to visualize data from hundreds of sensors...
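Not from the book itself, but as context for the pipeline the abstract names (feature matching, camera pose estimation, multi-view stereo): a minimal sketch of the triangulation step that turns matched 2D features from two posed cameras back into a 3D point, using the standard Direct Linear Transform. The camera intrinsics and poses below are purely illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel projections x1, x2 under
    3x4 projection matrices P1, P2 (DLT, homogeneous least squares)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point X through projection matrix P to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic calibrated cameras: identity pose, and a 1-unit baseline.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # True
```

In a full semantic 3D reconstruction system, this geometric core is repeated over many matched features and views, with the learned components discussed in the book supplying the matches, poses, and per-point semantic labels.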
Saved in:
Editor: | |
---|---|
Other: | |
Year of Publication: | 2022 |
Language: | English |
Physical Description: | 1 electronic resource (304 p.) |
Tags: |
|
id |
993577376104498 |
---|---|
ctrlnum |
(CKB)5470000001633503 (oapen)https://directory.doabooks.org/handle/20.500.12854/95825 (EXLCZ)995470000001633503 |
collection |
bib_alma |
record_format |
marc |
spelling |
Lv, Zhihan edt Augmented Reality, Virtual Reality & Semantic 3D Reconstruction Basel MDPI - Multidisciplinary Digital Publishing Institute 2022 1 electronic resource (304 p.) text txt rdacontent computer c rdamedia online resource cr rdacarrier Open access Unrestricted online access star Augmented reality is a key technology that will facilitate a major paradigm shift in the way users interact with data and has only recently been recognized as a viable solution for many critical needs. In practical terms, this innovation can be used to visualize data from hundreds of sensors simultaneously, overlaying relevant and actionable information over your environment through a headset. Semantic 3D reconstruction unlocks the promise of AR technology, providing a far greater availability of semantic information. Although several methods are currently available as post-processing approaches to extract semantic information from reconstructed 3D models, the obtained results have been uncertain and even incorrect. Thus, it is necessary to explore or develop a novel 3D reconstruction approach that automatically recovers the 3D geometry model and obtains semantic information simultaneously. The rapid advent of deep learning has brought new opportunities to the field of semantic 3D reconstruction from photo collections. Deep learning-based methods are not only able to extract semantic information but can also enhance fundamental techniques in semantic 3D reconstruction, including feature matching or tracking, stereo matching, camera pose estimation, and multi-view stereo methods. Moreover, deep learning techniques can be used to extract priors from photo collections, and this information can in turn improve the quality of 3D reconstruction.
English Technology: general issues bicssc History of engineering & technology bicssc feature tracking superpixel structure from motion three-dimensional reconstruction local feature multi-view stereo construction hazard safety education photoreality virtual reality anatomization audio classification olfactory display deep learning transfer learning inception model augmented reality higher education scientific production web of science bibliometric analysis scientific mapping applications in subject areas interactive learning environments 3P model primary education educational technology mobile lip reading system lightweight neural network face correction virtual reality (VR) computer vision projection mapping 3D face model super-resolution radial curve Dynamic Time Warping semantic 3D reconstruction eye-in-hand vision system robotic manipulator probabilistic fusion graph-based refinement 3D modelling 3D representation game engine laser scanning panoramic photography super-resolution reconstruction generative adversarial networks dense convolutional networks texture loss WGAN-GP orientation positioning viewpoint image matching algorithm transformation ADHD EDAH assessment continuous performance test Photometric Stereo (PS) 3D reconstruction fully convolutional network (FCN) semi-immersive virtual reality children cooperative games empowerment perception motor planning problem-solving area of interest wayfinding spatial information one-shot learning gesture recognition GREN skeleton-based 3D composition pre-visualization stereo vision 360° video 3-0365-6061-0 Wang, Jing-Yan edt Kumar, Neeraj edt Lloret, Jaime edt Lv, Zhihan oth Wang, Jing-Yan oth Kumar, Neeraj oth Lloret, Jaime oth |
language |
English |
format |
eBook |
author2 |
Wang, Jing-Yan Kumar, Neeraj Lloret, Jaime Lv, Zhihan Wang, Jing-Yan Kumar, Neeraj Lloret, Jaime |
author_facet |
Wang, Jing-Yan Kumar, Neeraj Lloret, Jaime Lv, Zhihan Wang, Jing-Yan Kumar, Neeraj Lloret, Jaime |
author2_variant |
z l zl j y w jyw n k nk j l jl |
author2_role |
Editor Editor Editor Other Other Other Other
title |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
spellingShingle |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
title_full |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
title_fullStr |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
title_full_unstemmed |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
title_auth |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
title_new |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
title_sort |
augmented reality, virtual reality & semantic 3d reconstruction |
publisher |
MDPI - Multidisciplinary Digital Publishing Institute |
publishDate |
2022 |
physical |
1 electronic resource (304 p.) |
isbn |
3-0365-6062-9 3-0365-6061-0 |
illustrated |
Not Illustrated |
work_keys_str_mv |
AT lvzhihan augmentedrealityvirtualrealitysemantic3dreconstruction AT wangjingyan augmentedrealityvirtualrealitysemantic3dreconstruction AT kumarneeraj augmentedrealityvirtualrealitysemantic3dreconstruction AT lloretjaime augmentedrealityvirtualrealitysemantic3dreconstruction |
status_str |
n |
ids_txt_mv |
(CKB)5470000001633503 (oapen)https://directory.doabooks.org/handle/20.500.12854/95825 (EXLCZ)995470000001633503 |
carrierType_str_mv |
cr |
is_hierarchy_title |
Augmented Reality, Virtual Reality & Semantic 3D Reconstruction |
author2_original_writing_str_mv |
noLinkedField noLinkedField noLinkedField noLinkedField noLinkedField noLinkedField noLinkedField |
_version_ |
1796652733062709248 |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>05573nam-a2201381z--4500</leader><controlfield tag="001">993577376104498</controlfield><controlfield tag="005">20240327142942.0</controlfield><controlfield tag="006">m o d </controlfield><controlfield tag="007">cr|mn|---annan</controlfield><controlfield tag="008">202301s2022 xx |||||o ||| 0|eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">3-0365-6062-9</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(CKB)5470000001633503</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(oapen)https://directory.doabooks.org/handle/20.500.12854/95825</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(EXLCZ)995470000001633503</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Lv, Zhihan</subfield><subfield code="4">edt</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Augmented Reality, Virtual Reality & Semantic 3D Reconstruction</subfield></datafield><datafield tag="260" ind1=" " ind2=" "><subfield code="a">Basel</subfield><subfield code="b">MDPI - Multidisciplinary Digital Publishing Institute</subfield><subfield code="c">2022</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 electronic resource (304 p.)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="506" ind1=" " ind2=" "><subfield code="a">Open access</subfield><subfield code="f">Unrestricted online access</subfield><subfield code="2">star</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Augmented reality is a key technology that will facilitate a major paradigm shift in the way users interact with data and has only recently been recognized as a viable solution for many critical needs. In practical terms, this innovation can be used to visualize data from hundreds of sensors simultaneously, overlaying relevant and actionable information over your environment through a headset. Semantic 3D reconstruction unlocks the promise of AR technology, providing a far greater availability of semantic information. Although several methods are currently available as post-processing approaches to extract semantic information from reconstructed 3D models, the obtained results have been uncertain and even incorrect. Thus, it is necessary to explore or develop a novel 3D reconstruction approach that automatically recovers the 3D geometry model and obtains semantic information simultaneously. The rapid advent of deep learning has brought new opportunities to the field of semantic 3D reconstruction from photo collections. Deep learning-based methods are not only able to extract semantic information but can also enhance fundamental techniques in semantic 3D reconstruction, including feature matching or tracking, stereo matching, camera pose estimation, and multi-view stereo methods. 
Moreover, deep learning techniques can be used to extract priors from photo collections, and this obtained information can in turn improve the quality of 3D reconstruction.</subfield></datafield><datafield tag="546" ind1=" " ind2=" "><subfield code="a">English</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Technology: general issues</subfield><subfield code="2">bicssc</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">History of engineering & technology</subfield><subfield code="2">bicssc</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">feature tracking</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">superpixel</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">structure from motion</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">three-dimensional reconstruction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">local feature</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">multi-view stereo</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">construction hazard</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">safety education</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">photoreality</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">virtual reality</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">anatomization</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">audio classification</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">olfactory display</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">deep learning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield 
code="a">transfer learning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">inception model</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">augmented reality</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">higher education</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">scientific production</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">web of science</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">bibliometric analysis</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">scientific mapping</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">applications in subject areas</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">interactive learning environments</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">3P model</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">primary education</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">educational technology</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">mobile lip reading system</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">lightweight neural network</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">face correction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">virtual reality (VR)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">computer vision</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">projection mapping</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">3D face model</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield 
code="a">super-resolution</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">radial curve</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Dynamic Time Warping</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">semantic 3D reconstruction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">eye-in-hand vision system</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">robotic manipulator</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">probabilistic fusion</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">graph-based refinement</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">3D modelling</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">3D representation</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">game engine</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">laser scanning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">panoramic photography</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">super-resolution reconstruction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">generative adversarial networks</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">dense convolutional networks</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">texture loss</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">WGAN-GP</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">orientation</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">positioning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield 
code="a">viewpoint</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">image matching</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">algorithm</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">transformation</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">ADHD</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">EDAH</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">assessment</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">continuous performance test</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Photometric Stereo (PS)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">3D reconstruction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">fully convolutional network (FCN)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">semi-immersive virtual reality</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">children</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">cooperative games</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">empowerment</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">perception</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">motor planning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">problem-solving</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">area of interest</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">wayfinding</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">spatial information</subfield></datafield><datafield tag="653" ind1=" " 
ind2=" "><subfield code="a">one-shot learning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">gesture recognition</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">GREN</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">skeleton-based</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">3D composition</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">pre-visualization</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">stereo vision</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">360° video</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">3-0365-6061-0</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Jing-Yan</subfield><subfield code="4">edt</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kumar, Neeraj</subfield><subfield code="4">edt</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lloret, Jaime</subfield><subfield code="4">edt</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lv, Zhihan</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Wang, Jing-Yan</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kumar, Neeraj</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lloret, Jaime</subfield><subfield code="4">oth</subfield></datafield><datafield tag="906" ind1=" " ind2=" "><subfield code="a">BOOK</subfield></datafield><datafield tag="ADM" ind1=" " ind2=" "><subfield code="b">2024-03-28 03:04:37 Europe/Vienna</subfield><subfield code="f">system</subfield><subfield code="c">marc21</subfield><subfield code="a">2023-01-18 
06:03:23 Europe/Vienna</subfield><subfield code="g">false</subfield></datafield><datafield tag="AVE" ind1=" " ind2=" "><subfield code="i">DOAB Directory of Open Access Books</subfield><subfield code="P">DOAB Directory of Open Access Books</subfield><subfield code="x">https://eu02.alma.exlibrisgroup.com/view/uresolver/43ACC_OEAW/openurl?u.ignore_date_coverage=true&portfolio_pid=5342571980004498&Force_direct=true</subfield><subfield code="Z">5342571980004498</subfield><subfield code="b">Available</subfield><subfield code="8">5342571980004498</subfield></datafield></record></collection> |