Three-Dimensional Field of Light Display
Navy STTR 2019.B - Topic N19B-T036
NAVSEA - Mr. Dean Putnam - firstname.lastname@example.org
Opens: May 31, 2019 - Closes: July 1, 2019 (8:00 PM ET)
TECHNOLOGY AREA(S): Human Systems
ACQUISITION PROGRAM: PEO IWS 1.0, AEGIS Integrated Combat System (IWS 1.0) Program Office
OBJECTIVE: Develop a Human Machine Interface (HMI) for three-dimensional (3D) Field of Light Display (FoLD) visualization systems to reduce cognitive burden and enable 3D collaborative environments.
DESCRIPTION: As the Navy continues to reduce the manpower required to operate increasingly complex systems, new methods that enable natural and intuitive interaction with 3D data are needed to reduce overall operator workload and enhance situational awareness. Operators who cannot quickly access and interpret data are prone to errors ranging from missing critical data during tactical situations to making judgments based on incorrect information. An optimized capability to engage with 3D information in a high-stress environment will allow the warfighter to increase task accuracy, reduce response time, and improve overall situational awareness.
Field of Light Display (FoLD) systems are a class of autostereoscopic displays that provide 3D aerial visualizations without head tracking or eyewear (which impedes natural human vision), allowing natural communication and collaboration among decision makers. In addition, FoLD systems provide 3D visualization regardless of viewer position or gaze direction, presenting correct imagery perspective to all viewers within the display's projection frustum. Several studies highlight the advantages of 3D light-field holograms in enabling better mission planning and medical training [Refs 1, 2, 3]. By presenting the 3D scene in a natural manner, the cognitive load on the viewer(s) is decreased and their ability to make decisions based on complex information improves.
Deconflicting prioritization in the various theaters of air, surface, and subsurface proves challenging due to the 3D nature of the data and its subsequent visualization on two-dimensional (2D) displays. Currently, the operator is required to divert attention from their tasks to click through multiple menus to obtain such metadata as ascending or descending attributions, latitude and longitude, trajectory, and asset state. FoLD technologies provide several novel capabilities for reducing cognitive load on operators performing identification (ID) during volume searches.
However, the way in which a user interacts with this 3D information requires further investigation to determine the optimal human/FoLD interface.
Much of the FoLD research to date has focused on the 3D projection aspects of producing a 3D aerial image. Of equal importance is the manner in which humans interact with the 3D image to make command-level decisions. For all FoLD systems, the 3D aerial image is ethereal, lacking both tactile and kinesthetic feedback. In some FoLD systems, all or part of the 3D image may sit behind a transparent enclosure or cover glass.
The Navy requires a mechanism for interacting with emerging FoLDs that will provide an optimized ability for the user to engage with 3D information in a high-stress environment. These types of environments can be replicated by laboratory testing of current operator display system tasks/scenarios (ascending or descending attributions, latitude and longitude, trajectory, and asset state) which adhere to the full spectrum of the Combat Information Center (CIC) and/or watch stander environments. The proposed solution will also need to consider the resulting physical and psychological effects on the user. Understanding how humans process information through cognitive load theory, human computer interaction (HCI), and multi-modal learning should be a part of the design process for a meaningful solution. Performance will be measured by assessing task completion times, user cognitive load analysis, and physical impacts of the proposed system.
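As a concrete illustration of the task-completion-time measure named above, the sketch below compares hypothetical timings for the same operator task performed on a conventional 2D console versus a candidate FoLD HMI. All numbers and variable names here are illustrative assumptions, not measured results.

```python
import statistics

# Hypothetical completion times (seconds) for one CIC-style ID task,
# five trials per display condition. These values are illustrative only.
baseline_2d = [14.2, 11.8, 15.6, 13.1, 12.9]
fold_hmi = [10.4, 9.7, 11.2, 10.1, 9.9]

mean_2d = statistics.mean(baseline_2d)
mean_fold = statistics.mean(fold_hmi)

# Relative reduction in mean completion time, one candidate performance metric.
improvement = 100 * (mean_2d - mean_fold) / mean_2d

print(f"2D mean: {mean_2d:.2f} s, FoLD mean: {mean_fold:.2f} s")
print(f"Relative reduction in completion time: {improvement:.1f}%")
```

A full evaluation would pair such timing data with cognitive-load instruments and physical-impact assessments across the spectrum of watch stander scenarios described above.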
This innovation must be a novel and practical solution to providing interactivity with 3D imagery produced by a FoLD system. The solution must allow accurate and repeatable interactivity and operation within the view volume. The ability to select, rotate, scale, translate, and otherwise manipulate 3D objects within the 3D scene is required. The research and development effort may include 3D pucks, trackballs, mice, wands, gloves, hand position sensors, video game controllers, or any other comparable technology.
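The select/rotate/scale/translate manipulations required above are conventionally expressed as composable 4x4 homogeneous transforms applied to points in the display's view volume. The following is a minimal NumPy sketch of that idea; the function names and the example "track" point are illustrative assumptions, not part of any FoLD API.

```python
import numpy as np

def translate(t):
    """4x4 homogeneous translation by vector t = (tx, ty, tz)."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

def scale(s):
    """4x4 uniform scale by factor s about the origin."""
    m = np.eye(4)
    m[:3, :3] *= s
    return m

def rotate_z(theta):
    """4x4 rotation by theta radians about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def apply(transform, points):
    """Apply a 4x4 transform to an (N, 3) array of scene points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ transform.T)[:, :3]

# Compose manipulations right-to-left: scale a selected track symbol,
# rotate it about z, then raise it within the view volume.
track = np.array([[1.0, 0.0, 0.0]])
m = translate([0, 0, 2]) @ rotate_z(np.pi / 2) @ scale(2.0)
print(np.round(apply(m, track), 6))  # -> [[0. 2. 2.]]
```

Whatever input device is chosen (puck, wand, glove, hand tracker), its readings ultimately drive transforms of this kind, so accuracy and repeatability requirements can be stated directly in terms of the resulting scene coordinates.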
The solution must execute with little to no impact on the computational performance of the combat system environment under test. The proposed Human Machine Interface (HMI) should be independent of the underlying FoLD implementation technology, handle complex tasks, support ruggedization for use in harsh environments, allow natural and intuitive operation (minimizing training), and support multiple simultaneous users.
PHASE I: Provide a concept for a FoLD HMI for interacting with a 3D image. The concept must show that it can feasibly meet the requirements of the Description. Establish feasibility through modeling and demonstration of the HMI concept. Develop a Phase II plan that includes human subject testing. In preparation for the human subject testing to take place during Phase II, Institutional Review Board (IRB) approval must be acquired during Phase I. The Phase I Option, if exercised, will include the initial design specifications and capabilities description to build a prototype solution in Phase II.
PHASE II: Develop and deliver a FoLD HMI interactive device prototype capable of demonstrating implementation and integration into the combat system environment for testing and evaluation. Demonstrate accuracy, repeatability, and functionality, adhering to the requirements outlined in the Description. Perform a demonstration at a Government-provided Land Based Test Site (LBTS), which represents an unclassified simulation environment.
PHASE III DUAL USE APPLICATIONS: Support the Navy in transitioning the technology to Navy use and support further refinement and testing of the HMI's functionality following successful prototype development and demonstration. Upon capability demonstration and quantifiable test results, direct the focus toward the transition and integration of the HMI with the emerging FoLD Systems for a 2024 technical insertion as a component of the Aegis Combat System.
This HMI device will allow users to interact with data in a 3D environment naturally and intuitively, and will greatly enhance pre-operative planning and post-operative reviews by surgeons, medical students, and hospital staff.
REFERENCES:
1. Fuhrmann, Sven, et al. "Investigating Geospatial Hologram for Special Weapons and Tactics Teams." Cartographic Perspectives, 2009. http://cartographicperspectives.org/index.php/journal/article/viewFile/cp63-fuhrmann-et-al/214
2. Hackett, M. "Medical Holography for Basic Anatomy Training." Orlando: Interservice/Industry Training, Simulation and Education Conference (I/ITSEC), 2013. https://cdn2.hubspot.net/hub/151303/file-476620026-pdf/docs/medical_holograms_whitepaper.pdf
3. Burnett, Thomas. "Light-field Display Architecture and the Challenge of Synthetic Light-field Radiance Image Rendering." SID, 2017. https://www.researchgate.net/publication/318144885_61-1_Invited_Paper_Light-field_Display_Architecture_and_the_Challenge_of_Synthetic_Light-field_Radiance_Image_Rendering
KEYWORDS: FoLD; 3D Light-field Holograms; Human Machine Interface; 3D Aerial Image; Human Computer Interaction; 3D visualization