New User Interface

Background information and current technology development

Human-computer interaction has evolved from primitive command-line interfaces to the current paradigm for computer interaction: the WIMP (window, icon, menu, pointing device) metaphor of the ubiquitous graphical user interface (GUI). AR (augmented reality) requires a further evolution of the UI paradigm to provide intuitive, natural interaction with augmented real-world environments. Ideally, the UI should be designed so that the user experiences it as a natural extension of the body, rather than as a GUI on a device or screen.
One of the leading projects with AR UIs is the MIT Media Lab Sixth Sense project (see http://blog.wired.com/business/2009/02/ted-digital-six.html). The MIT team has developed a wearable computing system that removes restrictions of the traditional WIMP interface and produces a more intuitive and fluid interaction. The Sixth Sense system employs a camera, phone, projector, and colored finger caps (as fiducial markers that allow the camera to track and interpret finger motions) to let a user interact with a computing device. Sixth Sense replaces the fixed window (screen) of traditional GUIs and uses any available surface as an interactive display. Using the system, a user can summon and dismiss virtual devices or applications (such as a virtual watch) and interact with them using intuitive gestures, rather than with an independent mechanical pointing device such as a mouse. For example, a user could summon a virtual watch by "drawing" a clock with their hand, and dismiss the watch by dragging a finger across it.


The system is worn as a pendant, which keeps the hands free for manipulating real and virtual objects. For example, the figure to the right shows a phone numeric pad projected onto a hand. The index finger is used to press the virtual buttons, analogous to dialing a real phone.
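The color-marker tracking that a Sixth Sense-style system depends on can be illustrated with a small sketch. This is not the MIT implementation; the frame format (a 2-D grid of RGB tuples) and the tolerance threshold are assumptions for illustration.

```python
# Hypothetical sketch of fiducial color tracking: locate the centroid of
# pixels matching a finger-cap color in a video frame.
# A frame here is a 2-D grid (list of rows) of (R, G, B) tuples.

def find_marker(frame, color, tolerance=30):
    """Return the (row, col) centroid of pixels near `color`, or None."""
    matches = [
        (r, c)
        for r, row in enumerate(frame)
        for c, px in enumerate(row)
        if all(abs(px[i] - color[i]) <= tolerance for i in range(3))
    ]
    if not matches:
        return None
    n = len(matches)
    return (sum(r for r, _ in matches) / n, sum(c for _, c in matches) / n)
```

Tracking the centroid from frame to frame recovers the finger's motion path, which a gesture recognizer can then interpret.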

The Apple iPhone, with its multi-touch control, integrates advanced haptic features into the UI; such features heighten the user’s immersion in the application. However, the UI is similar to the WIMP paradigm with the fingers replacing a mechanical pointing device. Furthermore, the majority of the user’s attention and interaction is focused on the screen, not the local environment, which limits the user’s immersion in the overall AR experience.
Similarly, Microsoft’s Kinect for Xbox 360 is a “controller-free gaming and entertainment” system. Using a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360, without the need to touch a game controller, through a natural user interface using gestures, spoken commands, or presented objects and images.

The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software" which provides full-body 3D motion capture, facial recognition, and voice recognition capabilities.
In addition, Microsoft’s Surface computer, introduced in May 2007, sought to change the way users interact with digital content by allowing them to use natural gestures and touch (http://www.microsoft.com/surface/Default.aspx). Surface features a 30-inch tabletop display that allows several people to work independently or simultaneously. Microsoft envisions a variety of uses: in one example, people place a card on the table to call up a virtual stack of digital photos from a computer server and then rotate, resize, and spread them across the table using their hands. In another, diners split a tab by dragging icons of their meals to their credit cards. With respect to AR, the Surface UI uses a simple set of primitives, especially those that interact with real-world objects, that may transfer to the problems faced by AR UIs.
Surface includes the following components:
Multitouch screen. A diffuser turns the Surface's acrylic tabletop into a large horizontal "multi-touch" screen, capable of processing multiple inputs from multiple users. The Surface can also recognize objects by their shapes or by reading coded "domino" tags.
Infrared vision. Surface's "machine vision" operates in the near-infrared spectrum, using an 850-nanometer-wavelength LED light source aimed at the screen. When objects touch the tabletop, the light reflects back and is picked up by multiple infrared cameras with a net resolution of 1280 x 960.
Computer and operating system. Surface uses many of the same components found in everyday desktop computers, including a Core 2 Duo processor, 2GB of RAM and a 256MB graphics card. Wireless communication with devices on the surface is handled using Wi-Fi and Bluetooth antennas (future versions may incorporate radio frequency identification, or RFID, or near field communications). The underlying operating system is a modified version of Windows Vista.
Projector. Surface uses the same DLP light engine found in many rear-projection HDTVs. The footprint of the visible light screen, at 1024 x 768 pixels, is actually smaller than the invisible overlapping infrared projection to allow for better recognition at the edges of the screen.
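The infrared-vision step above can be sketched as a threshold-and-group pass over the camera image: bright (reflecting) pixels are grouped into connected blobs, and each blob's centroid becomes a touch point. This is an illustrative reconstruction, not Microsoft's code; the intensity grid and threshold are assumptions.

```python
# Illustrative sketch of turning reflected IR intensity into touch points:
# threshold the 2-D intensity grid, then flood-fill bright pixels into
# connected blobs and report each blob's centroid.

def touch_points(image, threshold=200):
    """Return centroids of connected bright regions in a 2-D intensity grid."""
    rows, cols = len(image), len(image[0])
    seen = set()
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                n = len(blob)
                blobs.append((sum(y for y, _ in blob) / n,
                              sum(x for _, x in blob) / n))
    return blobs
```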


Other demos of AR can be found at the following sites:
http://www.youtube.com/watch?v=uGNgyGU-81E
http://www.youtube.com/watch?v=xGsfDDxhFN0
http://www.locus.org.uk/publications/Aslib2007.pdf
http://www.youtube.com/watch?v=LmRd6XtVF0U
http://www.youtube.com/watch?v=gNY_kODnJR8
http://research.microsoft.com/en-us/um/people/awilson/papers/Wilson%20PlayAnywhere%20UIST%202005.pdf
http://www.youtube.com/watch?v=g8Eycccww6k
Many previous solutions do not sufficiently address the basic components of intuitive and natural immersive user interaction. These components include:
Naïve physics. Users have common sense knowledge about their physical environment. Users experience and recognize the effects of gravity, motion, acceleration, inertia, mass, and physicality of objects. For example, inertia is addressed in the Apple iPhone in its scroll functions where following rapid scroll motion, a list will continue to move although the user has stopped direct interaction. However, the feature does not convey physicality or offer the user a chance to interact in other ways.
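The scroll-inertia behavior described above can be sketched in a few lines: after the finger lifts, the list keeps moving with the release velocity, decaying each step until it falls below a stopping speed. The friction constant and threshold are illustrative values, not Apple's.

```python
# Minimal sketch of scroll inertia: position keeps advancing after release,
# with velocity decaying by a friction factor each simulation step.

def inertial_scroll(position, velocity, friction=0.9, min_speed=0.5):
    """Yield successive positions as the scroll velocity decays to a stop."""
    while abs(velocity) >= min_speed:
        position += velocity
        velocity *= friction
        yield position
```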
Body awareness and skills. Users are aware of their bodies and possess skills for controlling and coordinating them. The Sixth Sense project acknowledges body awareness and skills, employing users’ gestures to control the system. However, the system is limited—it requires fiducials and minimal use of body parts other than the fingers. For example, a more natural system might automatically summon a watch when a user holds their wrist in a natural watch-reading position.
Environment awareness and skills. Users are aware of their surroundings and possess skills for negotiating, manipulating, and navigating within their environment. Sensor systems including GPS, digital compasses, cameras, and accelerometers provide users with augmented details about the spatial relationships within their environment. Users also shift between allocentric (3-D “bird’s-eye” spatial data) and egocentric (“first-person” spatial data) representations that are useful for navigating. Interface usage of this awareness has largely been limited to games and experimental file systems that use a spatial metaphor but are less efficient than the WIMP implementation.
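The allocentric/egocentric distinction has a concrete computational core: a translation and rotation between reference frames. A minimal sketch, assuming heading is measured clockwise from north (the +y axis):

```python
import math

# Convert a landmark's allocentric (map, "bird's-eye") coordinates into the
# user's egocentric (first-person) frame of (forward, right) offsets.

def to_egocentric(landmark, user_pos, heading_deg):
    """Rotate/translate allocentric (x, y) into (forward, right) of the user.

    heading_deg is measured clockwise from north (+y axis).
    """
    dx = landmark[0] - user_pos[0]
    dy = landmark[1] - user_pos[1]
    h = math.radians(heading_deg)
    forward = dx * math.sin(h) + dy * math.cos(h)
    right = dx * math.cos(h) - dy * math.sin(h)
    return forward, right
```

Facing north, a landmark 5 units north is straight ahead; facing east, the same landmark lies 5 units to the user's left.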
Social awareness and skills. Ideal AR implementations should be aware of and smoothly and unobtrusively integrate with social environments. AR applications should not limit users’ social awareness and their ability to interact with the real, local environment (unlike what is often seen today with office workers who focus on Blackberry devices while in social settings).

New UI research topics:
1. Simplified marker-less gesture interfaces. Simplified interfaces that do not require users to wear markers, providing interaction with the AR system that is more intuitive and natural and permitting deeper immersion.
2. Gesture tracking mechanisms for mobile systems. Gesture tracking on resource-constrained devices, in non-ideal lighting conditions, using only the simple cameras (e.g., without infrared) already present in a typical mobile device.
3. Gesture-based UI for multi-person manipulation of public systems. New schemes to enable multiple AR users to interact with each other and manipulate data, especially in large public spaces.
4. Methods for manipulating virtual objects with modalities that match different modes of real-world use. For example, incorporating intuitive naïve physics by requiring increased application of force on a gesture-based control system to move a heavy virtual object.
5. WIMP replacements. Gesture-based replacements for traditional (WIMP) screens, control panels, keyboards, and pointing devices.
6. Large natural interfaces for supporting direct manipulation of visual content. How do we enable large natural "surface-type" interfaces that allow multiple individuals to collaborate using direct touch or manipulation of virtual/augmented objects?
7. Simplified command interfaces for multifunction control. How do we enable simplified command interfaces, using haptics, to provide more intuitive and natural interaction, especially on resource-constrained devices? How do touch-based/haptic interfaces need to evolve to be applicable to devices of various sizes (mobile phone, ebook reader, gaming system, desktop, multi-person collaborative display)?
8. Methods for manipulating virtual objects with modalities that match different modes of real-world use. How to incorporate intuitive naïve physics in all aspects of the AR UI: for example, requiring increased application of force on a tactile or haptic control to move a heavy virtual object.
9. Contact-less haptics. How to provide haptic feedback when the individual is not in physical contact with a device or display surface?
10. New ways to translate real-world interactions to AR using haptics. What new ways are needed to manipulate virtual objects that can be touched, pushed, lifted, moved, etc., in ways closely analogous to objects in the real world?
11. Usability hurdles. How to overcome ease-of-use issues in a predominantly haptics-oriented interface, with sensitivity to the specific usage context: for example, an automobile display vs. a desktop display, or a surgeon with gloves vs. a gamer mid-move?
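Two of the topics above call for naïve physics in object manipulation: heavier virtual objects should demand more force. A minimal sketch of that idea, using an assumed static-friction model (the constant and units are invented for illustration):

```python
# Naive-physics sketch: a virtual object's acceleration from a measured
# gesture force, with a static-friction threshold so that light touches
# cannot budge a heavy object (a = (F - friction) / m above the threshold).

def object_acceleration(force, mass, static_friction=1.0):
    """Return acceleration of a virtual object; 0 if force is below friction."""
    if force <= static_friction * mass:
        return 0.0
    return (force - static_friction * mass) / mass
```

The same force that accelerates a light object briskly moves a heavy one slowly or not at all, which is the physicality cue the research topic asks for.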


User Interface and Service based on Ad-hoc Networking of Mobile Devices

Background information and current technology development

Co-located photo-sharing on mobile devices. One paper presents research into technology that facilitates sharing digital photographs using mobile devices. The researchers propose an application, called FunkyShare, for sharing digital photos on a mobile device in a group setting. The tool allows users with PDAs to share photographs in a co-located group, enhancing both the photo-sharing experience and the social interactions around it. The FunkyShare application consists of two parts: the graphical user interface and the backend networking functions.

Multi-sensor context-awareness in mobile devices and smart artifacts. Another line of research explores ways to augment mobile devices with environmental and situational awareness as context. The paper proposes integrating diverse, simple sensors as an alternative to generic sensors for position and vision. These sensors can glean awareness of situational context that cannot be inferred from location, and can be targeted at resource-constrained device platforms that typically do not permit processing of visual context.
The researchers have developed an awareness module for augmenting a mobile phone, devices and technologies that exemplify context-enabled everyday artifacts, and platforms for situation-aware mobile devices. The awareness device was implemented as a plug-in for mobile host devices (see figure at right) and can be applied to augment a mobile phone.
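The multi-sensor idea can be sketched as a simple rule-based fusion step: several cheap sensor readings jointly imply a situational context that no single reading (or location alone) could. The sensor names, thresholds, and context labels below are invented for illustration, not taken from the paper.

```python
# Hypothetical rule-based context inference from simple sensors:
# accelerometer variance, ambient light (lux), and a capacitive
# in-hand flag combine into a coarse situational context.

def infer_context(accel_var, light, in_hand):
    """Classify device context from three simple sensor readings."""
    if accel_var < 0.01 and light < 5:
        return "in pocket or bag"          # dark and motionless
    if in_hand and accel_var > 1.0:
        return "user walking with device"  # held and shaking rhythmically
    if in_hand:
        return "in hand, stationary"
    return "resting on surface"
```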

Situation-aware ad-hoc social interaction. A patent application describes a method that automatically builds a social profile for a user by monitoring the user’s electronic device usage. The method seeks to provide a social networking service that lets users engage in situation-aware, ad-hoc social interactions. Social networking services are provided automatically, based on information derived from user devices, forming social networks in an ad-hoc fashion without requiring users to subscribe, register, or log in to join the network. In this way, people are automatically identified based on information derived from their devices.

UI based on Ad-hoc networking research topics:

1. Innovative uses for sensor and/or network combinations to provide useful and valuable services.
2. Power-aware methods, specific to these devices, for gathering useful network or sensor data that would otherwise be too expensive.
3. Techniques for discovering useful, purpose-directed ad-hoc networks of mobile devices.
4. Techniques for anonymization and privacy preservation for services arising from these technologies, or, alternatively, methods for robust context-sensitive authentication.
5. Techniques for establishing trust in an ad-hoc network.



Brain Computing Interface (BCI)

Background information and current technology development

The final frontier in immersive interaction environments is the non-invasive brain-computer interface (BCI). BCI requires the generation of brain signals in a predictable way so they can be transformed into control signals. It allows users to control computers and applications with thought; the goal is to translate the intent of a subject directly into commands for a computer application or a neuroprosthesis (3). Many installations tend to be bulky, built for a specific use, and can require multiple training sessions before being reliable enough for near-natural interactivity. Research in this area is fertile enough that IBM predicts mind reading is less than five years away (4). Innovation in BCI could usher in the next paradigm of truly personalized mobile computing.

All in the mind. For many years, the majority of BCI research and development focused on clinical use or on individuals with motor disabilities that prevent them from using keyboards and other conventional tools. These systems concentrate on cursor movement to select letters or icons from established palettes. They often use numerous sensors, require conductive gels, and do not scale easily to mobile devices. Recently, BCI has emerged as a control mechanism for other applications, such as online games. NeuroSky has developed a solution that can track a number of brainwaves with a single dry sensor to sense sleep, alertness, concentration, and stress levels (18). Emotiv’s hardware requires several sensors and extensive training, but is said to be able to extract thoughts, feelings, and facial expressions (19). On a more playful note, Necomimi by Neurowear (powered by NeuroSky), the brainwave-controlled cat ears that sense the wearer’s concentration level, was named one of Time Magazine’s 50 Best Inventions of the Year (20).
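How a single-sensor headset might derive an attention-style metric can be sketched with standard signal processing: estimate EEG band power and compare the beta band (associated with concentration) against the alpha band (relaxation). The naive DFT and the ratio-based score below are illustrative only, not NeuroSky's algorithm.

```python
import math

# Illustrative EEG band-power sketch: a naive DFT sums spectral power
# over a frequency band, and an "attention" score compares beta vs. alpha.

def band_power(samples, rate, lo, hi):
    """Summed DFT power over frequencies in [lo, hi] Hz."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * rate / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def attention_score(samples, rate=128):
    """Fraction of alpha+beta power in the beta (concentration) band."""
    alpha = band_power(samples, rate, 8, 12)   # relaxation band
    beta = band_power(samples, rate, 13, 30)   # concentration band
    return beta / (alpha + beta) if alpha + beta else 0.0
```

A window dominated by 20 Hz activity scores near 1 (concentrating); one dominated by 10 Hz activity scores near 0 (relaxed).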


Figure xx: Neurosky Mindwave (Source #18)
Figure xxx: Emotiv EPOC (Source #19)
Figure xx3: Necomimi by Neurowear (Source #20)

Meanwhile, Toyota, along with RIKEN, has taken a different tack by demonstrating a BCI controlled wheelchair via a skull cap equipped with EEG electrode array (21).

Figure xx: Toyota BCI wheelchair interface (Source #21)



BCI (Brain Computing Interface) research topics:

Non-invasive, dry sensor(s) and method(s) for emotion extraction and expression to provide:
1. Seamless thought-based access to handset applications (e.g., just thinking about calling a relative)
2. Unique multimodal interaction for better multitasking
3. Low-cost real-time gaming interfaces, with better biosignal usage for faster response and deeper character immersion
4. Tracking of the user's physiological brainwave profile (mood, alertness)
5. Machine learning and training to personalize and customize the mobile handset to a unique user's features and preferences
6. Assistance for the physically or emotionally challenged, the elderly, and children



Human Touch

Background information and current technology development
Human Area Networks (HAN), closely related to but distinct from Body Area Networks (BAN), have seen increased development since 2002. A HAN uses the minute electric field emitted at the surface of the human body as a safe, high-speed network transmission path, formed when one part of the body (hands, fingers, arms, feet, face, legs, or torso) touches a transceiver (doorknob, car, appliance, another person). Clothing does not necessarily hinder its use, and communication is terminated by breaking physical contact. Simple, everyday actions (touching, sitting, walking, gripping, stepping) can become triggers for locking/unlocking, starting/stopping, and other actions. Combining HAN with mobile devices could enhance security, provide wireless connectivity between a music player and headphones, enable passing files between a digital camera and a computer, allow shaking hands to exchange business cards, and even allow swapping phone numbers by kissing (5).
In another fashion, bio-acoustic technologies such as the Skinput concept by Microsoft Research take advantage of the body's characteristics to use the body itself as an input medium (6). Because bone and skin produce diverse acoustic signals, the ability to assign different body areas to different commands provides a nearly limitless surface for interaction and personalization.
Bone-conduction technologies (most notably bonephones) have also received widespread attention. Sensors placed in front of or behind the ear transmit sound to the inner ear via vibrations through the skull. More extensive use of bone conduction could considerably improve quality of life for the hearing impaired and combat the growing concern of hearing loss among the iPod youth. One study showed hearing loss among teens rose from 15% in the early ’90s to 19% in 2005, roughly a 30% increase (7). Another study by Tel Aviv University on the effects of personal listening device (PLD) habits, combined with acoustic measurement results, indicates that a quarter of the teens tested were at severe risk for hearing loss (8).

All in the body. RedTacton has been one of the pioneers in HAN development. The technology is broadband and interactive in nature; the duplex communication speed achieved so far is around 10 Mbit/s. Transmission speed does not suffer in congested areas, since the transmission path is the body itself (5). Security is a lesser concern, since users can interrupt communication at will by releasing the object being touched.


Fig xx: RedTacton Human Area Network  (Source #5)                          
    Fig xx: Skinput (Source #6)

Skinput (Microsoft Research) employs the human body to deliver acoustic signals, effectively making the skin an input medium. Mechanical vibrations propagating through the body are measured by sensors on an armband to determine the location of the taps (6).
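The final classification step in a Skinput-style system can be sketched as nearest-centroid matching: each tap yields a feature vector from the armband's vibration sensors, and the tap location is taken as the closest of the per-location centroids learned during a short calibration. This is an illustrative sketch, not the published implementation; the feature vectors and labels are invented.

```python
# Nearest-centroid tap-location classifier: `centroids` maps a body
# location label to the mean feature vector recorded during calibration.

def nearest_location(features, centroids):
    """Return the location label whose centroid is closest to `features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))
```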

Bone conduction as a viable solution in mobile design has been discussed since the mid-’90s. In 2003, a wrist-worn apparatus called FingerWhisper used these concepts to convert incoming sound into vibrations at the wrist, conducted to the fingertip, from which a user could hear the conversation by inserting a finger into the ear (22). This catalyzed research around novel aural solutions and is leading to considerable innovation.

     
  
Figure xx: FingerWhisper (Source #22)
  
Figure xx: TEAC headset (Source #23)
TEAC has commercialized the FillTune HP-F100, using “giant magnetostrictive bone-conduction technology that vibrates the sense organ directly” to deliver high-quality sound (23). Recent and ongoing research at the Georgia Tech Sonification Lab is working to combine spatial audio with bone conduction to provide superior 3D audio experiences (24).



Human Touch research topics:

Innovative hardware and methods that can utilize the human body, clothes, jewelry, etc., to effectively create:
1. A secure transmission medium from one entity (device/person) to another (device/person)
2. On-body device interactivity with a high degree of accuracy
3. Bone-conduction and other technologies to address hearing loss, especially in the elderly and youth demographics






Non Display Interface

Background information and current technology development
The current interactive centerpiece of any mobile device is unquestionably the screen. Whether single or multiple screens, or the promise of flexible screens (9, 10), the larger the better. But the notion of requiring a screen at all has been challenged by the introduction of Apple’s Siri application on the iPhone. With varying degrees of accuracy, users can use their natural voice to search content and issue commands without any display at all. For areas where speaking is strongly discouraged, voiceless input has been demonstrated via sensors attached to the neck that detect movement and the neurological activity the brain sends to the vocal cords (11).
Similar to fingerprint and facial recognition as biometric media for security applications, voice has inherent issues with unacceptably high False Acceptance Rates (FAR) and False Rejection Rates (FRR). Advances in voice-attribute tracking and ambient-noise filtering have overcome many of these inherent vocal and aural obstacles.
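FAR and FRR have a simple operational definition worth making concrete: given match scores for genuine users and impostors, a decision threshold trades the two error rates off against each other. A minimal sketch (score scale and data are illustrative):

```python
# Compute the two biometric error rates at a given decision threshold:
# FAR = fraction of impostor scores wrongly accepted (score >= threshold),
# FRR = fraction of genuine scores wrongly rejected (score < threshold).

def far_frr(genuine_scores, impostor_scores, threshold):
    """Accept when score >= threshold; return (FAR, FRR) as fractions."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Raising the threshold lowers FAR at the cost of FRR, and vice versa; a usable voice biometric must push both down at once.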
Implementations of olfactory (smell) and gustatory (taste) combinations as a communicative interface can also serve as non-conventional handset options. The teleportation of scents, or at least the ability to recreate scents remotely through an Olfactory Transmission Protocol (OTP), could manipulate the limbic system to deliver greater psychological impact and a better user experience if realized in a mobile configuration.

Non-Display Interface. Over the past several years, hundreds of companies have introduced text-to-speech (TTS) and speech-to-text (STT) applications in a variety of languages, all with varying degrees of success; many require training. Vlingo, with its “intent engine” (25), and Voicebox’s Voice Search (26) use sophisticated algorithms to offer cloud-assisted voice-to-text search as a natural interface for devices. Both generally work well at filtering out noise while accurately returning requested content from natural speech input. In the security realm, VoiceVault has developed algorithms to detect more than 100 physiological metrics (nasal cavity size, etc.) to determine the authenticity of the user. Minor colds and other ailments that affect voice quality have been taken into account to provide unparalleled accuracy. A plethora of other solutions, including VoicePrism (cognitive and emotional component extraction from voice) and Nemesysco (vocal emotion detection and analysis), round out the variety of vocal solutions associated with current handsets (27, 28, 29).


   
                 
Figure xx: VoicePrism voice analyzer (Source #28)       Figure xx: MEMs microphones (Source #31)    Figure xx:  Audeo voiceless (Source #11)   

For any voice solution to work well, good audio input relies on audio processing, bandwidth, network quality, and quality components. Outdated ECM microphones are being replaced by silicon MEMS microphones that are thinner, lighter, and less susceptible to mechanical vibration, electronic interference, and temperature variations. They can also significantly enhance ambient-noise suppression, which is why Apple has begun incorporating them in newer iPhone and iPad models (30). As forerunners in this arena, Akustica and Analog Devices have begun gaining attention from Samsung and other tablet makers (31, 32).

For sub-vocal speech, developers at the Texas Instruments Developers Conference demonstrated the Ambient Audeo System, a wireless sensor worn around the neck that captures the neurological activity the brain sends to the vocal cords when someone attempts to speak (11).

For some, the olfactories are the most sensitive of senses, but unfortunately are underutilized in communication. Attempts at pairing the computing world to scents hold some promise. Sony Ericsson marketed handsets equipped with small scented sheets that provided various aromatherapy enjoyment. NTT Lab’s  iAroma “fragrance communication system” tested a device  that uses different oil-based cartridges from which the contents are combined and vaporized to release a number of different fragrances (33).  

Figure xx: iAroma (Source #33)   


Non Display Interface research topics:

New, low-cost, more reliable technologies that can improve input primarily through non-visual means:
1. Novel MEMS designs to better capture sound, light, motion, temperature, odor, humidity, etc.
2. On-body sensors to more fully integrate input options
3. Improvements in sub-vocal "voiceless" input




Assistance for Senior Citizen, Physically Challenged and Children


Background information and current technology development

The handset has transitioned from novelty item to a very real extension of a person’s identity. At the same time, user demographics are changing to include a larger subset of the population. Carriers dream of increasing their subscriber base, but have not yet contemplated how to cater to the disparate needs of each user group. Senior citizens generally have a different set of requirements from younger early adopters, or need feature sets that make health issues more manageable. Sadly, the most common feature set of phones targeted at the elderly includes only rudimentary changes: larger buttons, larger fonts, and quick dialing.
An early example of a targeted device is the assistive print-to-speech reader released by the National Federation of the Blind (12), cleverly incorporated into certain Nokia handsets. To keep individuals out of harm’s way, a more recent research project employs Galvanic Vestibular Stimulation (GVS), an electrical stimulation technique that can apply perceptual force to remotely steer (or throw off) the balance of individuals by placing electrodes on the mastoid (13).
BCI and voiceless input are strong candidates for handset-based assistive technologies as well. The latter may allow those with speech impediments to interact more comfortably and even participate in activities that use voice (e.g., silent karaoke).
Feedback and concern from users prompted one carrier to announce the development of smartphone jackets that can measure a number of parameters. Handsets can be slipped into specific sensor-enabled shells to measure body fat/muscle mass, halitosis and alcohol levels, and even UV radiation risks (14).
Promises of robust healthcare applications have been sounded for the past decade, but industry movement indicates it is ready for low-cost, non-invasive, remotely managed care. Tremendous opportunities exist here. Galvanic Skin Response (GSR) sensors measure skin conductivity, which is highly sensitive to emotion. Measuring such parameters may provide a better indication of a person’s fear, anger, restlessness, and other emotional states that can signal underlying benign or acute concerns, such as Post-Traumatic Stress Disorder (PTSD). The obesity epidemic (CDC research found that over 30% of adults and nearly 17% of children in the US are obese (15)) and sleep apnea, a widely known but overlooked issue, are two areas that can and should be addressed by advances in mobile technology, where a combination of integrated biosensors could further personalize the monitoring required for specific users to address their health concerns in a private, non-invasive way.
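A handset-side GSR monitor could work along these lines: smooth the noisy conductance trace with a moving average and flag sustained elevation over a personal baseline as a possible arousal/stress event. The window size and threshold factor below are invented for illustration.

```python
# Illustrative GSR arousal detector: moving-average smoothing followed by
# a threshold test against a personal baseline conductance.

def detect_arousal(trace, baseline, window=3, factor=1.5):
    """Return sample indices where smoothed conductance exceeds factor * baseline."""
    events = []
    for i in range(window - 1, len(trace)):
        avg = sum(trace[i - window + 1:i + 1]) / window
        if avg > factor * baseline:
            events.append(i)
    return events
```

In a real application, flagged stretches would feed a higher-level classifier or alert rather than being reported raw.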

Figure 1: Monitoring Obesity (Source #15)

Displays are standard issue on smartphones and have proven to be a reliable and adequate (if power-hungry) interface, and they will continue to be a critical component of future handset designs. With handset real estate at a premium, however, new designs will need to add value by making visual and haptic input easier and more resourceful. To this end, Peratech has developed a Quantum Tunnelling Composite (QTC) that adds force/pressure sensing to screens. The degree of pressure applied determines the resistance in the sensor, which can control, for instance, scrolling speed; this can offer more intuitive ways to interact (34). Senseg offers technologies that make the screen “come alive” with textures and contours without the use of motors, thereby reducing power consumption (35). Where vibration technology only provides haptic feedback after input, Tactus Technology believes the instantaneous creation of physical buttons on the screen is more advantageous because it offers both orientation and confirmation (36).
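The QTC scrolling example can be made concrete with a small mapping sketch: harder presses lower the sensor's resistance, and the controller maps that to scroll speed. The resistance range and the power-curve response here are assumed values, not Peratech's specification.

```python
# Hypothetical pressure-to-scroll mapping for a QTC-style force sensor:
# lower resistance means a harder press; a power curve keeps light
# presses gentle while hard presses scroll fast.

def scroll_speed(resistance, r_min=100.0, r_max=10000.0, max_speed=50.0):
    """Map sensor resistance (ohms, lower = harder press) to lines/second."""
    r = min(max(resistance, r_min), r_max)
    pressure = (r_max - r) / (r_max - r_min)  # 0 = light touch, 1 = hard press
    return max_speed * pressure ** 2          # nonlinear response curve
```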


           
Figure xx: Peratech QTC (Source 34)
Figure xx: Senseg haptic sheets (Source 35)
     Figure xx: TactusTech  (Source 36)
Modu Mobile (recently acquired by Google) allowed users to personalize their phone’s look and features by inserting the modular handset into a range of unique “jackets” specific to the situation. By building a jacket ecosystem, Modu made the jackets the centerpiece of the interactive experience with the handset, offering an interesting opportunity to customize at will (37). This is similar to Bug Labs’ modular approach to creating a rapid plug-and-play hardware and software development environment (38).


Figure xx: Modu Mobile jackets (Source 37)   
Figure xx: Bug Lab’s modular concept (Source 38)   
Personal healthcare is a ripe area for mobile handset innovation, as has been highlighted at recent CES shows. comScore and the Pew Research Center recently reported that the number of people accessing health information via handsets is increasing dramatically (39). Researchers at UCLA and the California NanoSystems Institute developed LUCAS (Lensless Ultrawide-field Cell monitoring Array platform based on Shadow), which shows promise as a “holographic microscope that can be attached to a cell phone’s camera…for diagnosing malaria or other blood-borne illnesses in isolated places,” and indicates the power and convenience of a handset to augment primary health services (40). In the paper “A Touchscreen as a Biomolecule Detection Platform,” authors Byoung Yeon Won and Hyun Gyu Park introduce a “biomolecular detection platform that utilizes a capacitive touchscreen to measure DNA concentration leading to personalized portable biosensors.” (41)


Figure xx: LUCAS: (Source #40)
Figure xx: Touchscreen biosensor (Source #41)


Assistance for Senior Citizens, the Physically Challenged, and Children research topics:

Handset solutions that allow more intuitive input and access to information for a variety of population segments:
1. Technologies that allow better mobility
2. Assistance for better and more accurate health assessment
3. Easier-to-use modular components for easier personalization of devices
4. Infrastructure considerations that provide safer access routes before/after natural disasters





Improve Situational Awareness


Background information and current technology development


Because the mobile handset has become an extension of ourselves, it is easy to forget about the world outside our personal bubble. Technologies that improve situational awareness during handset use are clearly necessary. Everyone has seen an individual so deeply engrossed in a handset that they cannot walk straight or otherwise function normally. It is akin to talking or texting while driving, where study after study indicates humans are remarkably poor at multitasking. The constant need to remain connected impairs our ability to function while driving, riding bicycles, or even walking, and places our lives in constant danger when we cannot pull our eyes from the screen. Walking behind others fixated on their displays (texting, gaming, obtaining location information) who suddenly stop or change direction can be just as perilous, according to research by Katsumi Tokuda at Tsukuba University (16). The shift from clamshell to flat smartphone designs is a likely cause of an increase in accidents, due to the angle at which a user’s eyes view the display. Tokuda showed that 47% of senior citizens over the age of 70, and 42% of mothers walking with children, have been hit by smartphone gazers. Professor Kazuhiko Kozuka at Aichi Institute of Technology noted that those who gazed at displays with constantly changing content were less aware of their surroundings than people viewing pamphlets or other static content (17). The device people carry to assist in the event of an accident is, in essence, the device promoting it. Sadly, the most at risk tend to be the elderly, children, and those with disabilities.

Figure 2: Display focus (Source #17)
Technologies that improve situational awareness are direly needed. Unique implementations of galvanic vestibular stimulation (GVS) by T. Amemiya (13) showed that, through a haptic directional indicator, individuals can be remotely “steered” or “nudged.” Such a system could conceptually keep pedestrians away from hazards (potholes, telephone poles, oncoming cars), assist the blind, or lead people to safety along a prescribed path. The same input could also provide a more realistic gaming experience (e.g., simulation of G-forces).
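The “steer or nudge” idea above reduces to a bounded, signed control signal. The sketch below is purely illustrative and is not drawn from Amemiya’s implementation: the proportional-gain mapping, the current limit, and the function name are all assumptions, assuming only a GVS rig that accepts a signed current (positive inducing rightward sway, negative leftward) within a safety cap.

```python
# Hypothetical safety cap on stimulation current, in milliamps.
MAX_CURRENT_MA = 1.5

def gvs_steering_command(heading_deg, hazard_bearing_deg, gain=0.05):
    """Map a hazard's bearing to a signed GVS current that sways the
    walker away from it: positive = rightward sway, negative = leftward.
    """
    # Signed angular error in (-180, 180]: where the hazard sits
    # relative to the walker's current heading.
    error = (hazard_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if error == 0.0:
        error = 1.0  # hazard dead ahead: arbitrarily dodge left
    # Hazard to the right (error > 0) -> negative current (sway left),
    # and vice versa; clamp to the assumed safety limit.
    return max(-MAX_CURRENT_MA, min(MAX_CURRENT_MA, -gain * error))
```

A real system would add rate limiting, distance-based scaling, and a hard medical-safety interlock; the point is only that the “nudge” is a small, bounded control output computed from heading geometry.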

     
      
Figure xx: Haptic Direction Indicator (Source #13)
Figure xx: SIGGRAPH 2005 Human Remote Control (Source #13)
The introduction of more transparent displays could also lend a hand in improving situational awareness: by seeing through the display, handset gazers would be able to distinguish what is approaching on the other side. Ongoing transparent OLED and LCD research by Japanese (Hitachi, Mitsubishi), Korean (Samsung, LG), and Chinese (Haier) manufacturers for monitor and laptop displays will likely trickle down to the handset once bill-of-materials (BoM) and yield considerations are addressed.


Figure xx: Transparent OLED Displays (Source #42)

Improve Situational Awareness research topics:

Novel ways to improve awareness of a user’s surroundings and reduce situational blindness:
1. Sensor implementations that allow for guidance while walking or conducting other activities
2. Improved GVS or other mechanisms that can steer or nudge users out of harm’s way
3. Combinations of sensors that allow multimodal input
4. Sensory feedback based on the user’s environment
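Topics 1, 3, and 4 above can be sketched as a simple rule-based fusion layer that combines a gait detector, a gaze detector, and a proximity estimate into one escalating alert. Every field name, threshold, and action string below is a hypothetical stand-in, not a real handset API:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    walking: bool       # e.g., from accelerometer gait detection (assumed)
    screen_gaze: bool   # e.g., from front-camera eye tracking (assumed)
    obstacle_m: float   # distance to nearest obstacle in metres (assumed)

def awareness_action(frame: SensorFrame) -> str:
    """Fuse gait, gaze, and proximity cues into a single alert decision."""
    if frame.walking and frame.screen_gaze:
        if frame.obstacle_m < 2.0:
            return "haptic_alert"  # imminent hazard: strongest cue
        if frame.obstacle_m < 5.0:
            return "dim_screen"    # gentle nudge to look back up
    return "none"                  # stationary, or already looking up
```

For example, `awareness_action(SensorFrame(walking=True, screen_gaze=True, obstacle_m=1.2))` escalates to the haptic alert, while a user who is standing still, or already looking up, triggers nothing — the multimodal combination is what keeps false alarms down.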


References:
(1) NPD: Tablet numbers to nearly equal notebooks by 2017; http://www.electronista.com/arti ... 3m.tablets.in.2017/
(2) Sherry Turkle, MIT Sociologist; http://www.mit.edu/~sturkle
(3) Brain-Computer Interface (BCI); http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface; http://eprints.pascal-network.or ... eKraBlaRaoMue06.pdf
(4) IBM: Mind reading is less than five years away; http://www.datamation.com/news/i ... ive-years-away.html
(5) RedTacton; http://en.wikipedia.org/wiki/RedTacton; http://www.seminarprojects.com/T ... network-full-report
(6) Skinput, Microsoft Research; http://research.microsoft.com/en ... groups/cue/Skinput/
(7) Hearing Loss in Teens on the Rise; http://children.webmd.com/news/2 ... eens-is-on-the-rise
(8) Teens Set for Early Hearing Loss; http://www.tgdaily.com/consumer- ... -early-hearing-loss
(9) Polymer Vision 6” SVGA rollable display with full color; http://www.youtube.com/watch?v=xxhCiLvi5LI&feature=youtu.be
(10) Samsung CES AMOLED display prototype; http://www.youtube.com/watch?v=qsIjfy8g2Pk&feature=endscreen&NR=1
(11) Ambient Co., Audeo; http://computer.howstuffworks.com/audeo1.htm; http://www.dailymotion.com/video/x4p2ww_ambient-co-audeo_news; http://www.theaudeo.com/new%20website/?action=technology
(12) KNFB Reading Technology, National Federation of the Blind; http://www.knfbreader.com
(13) Human interface applying galvanic vestibular stimulation; T. Maeda, H. Ando, T. Amemiya, M. Inami, N. Nagaya, M. Sugimoto, "Shaking The World: Galvanic Vestibular Stimulation As A Novel Sensation Interface", ACM SIGGRAPH 2005 Emerging Technologies, 2005; http://www.brl.ntt.co.jp/people/t-amemiya/research.html
(14) NTT DoCoMo, Smartphone Jackets; http://www.nttdocomo.com/pr/2011/001549.html
(15) CDC Research: Obesity epidemic; http://www.msnbc.msn.com/id/46027230/ns/health-diet_and_nutrition
(16) Katsumi Tokuda, Professor of Disability Cognition and Welfare Sociology, Tsukuba University; http://www.j-cast.com/tv/2011/10/07109420.html?p=all (Japanese only)
(17) Kazuhiko Kozuka, Professor of Media Informatics, Aichi University of Technology; http://www-in.aut.ac.jp/~kozuka/; http://www.j-cast.com/tv/2011/10/07109420.html?p=all (Japanese only)
(18) NeuroSky, Inc.; http://neurosky.com
(19) Emotiv Systems, Inc.; http://emotiv.com/
(20) Necomimi by Neurowear; http://neurowear.com/
(21) Toyota and RIKEN wheelchair controlled by brainwaves; http://www.gizmag.com/toyota-whe ... -brain-waves/12121/
(22) FingerWhisper bone conduction; http://www.gizmag.com/go/2434/
(23) TEAC FillTune; http://www.teac.com/data_storage/headphones/filltune_hp-f100/
(24) Georgia Tech Sonification Lab; http://sonify.psych.gatech.edu/research/bonephones/
(25) Vlingo; http://www.vlingo.com/
(26) VoiceBox; http://www.voicebox.com/
(27) VoiceVault; http://www.voicevault.com/
(28) VoicePrism; http://voiceprism.com/
(29) Nemesysco; http://www.nemesysco.com/
(30) Apple becomes world’s largest consumer of MEMS microphones; http://www.macworld.co.uk/ipad-iphone/news/?newsid=3331260
(31) Akustica; http://www.akustica.com/
(32) Analog Devices; http://www.analog.com/en/audiovi ... products/index.html
(33) NTT iAroma; http://www.fastcompany.com/blog/ ... ternet-comms-device
(34) Peratech; http://www.peratech.com/qtctechnology.php
(35) Senseg; http://senseg.com/
(36) Tactus Technology; http://www.tactustechnology.com/
(37) Modu Mobile; http://en.wikipedia.org/wiki/Modu
(38) Bug Labs open-source HW and SW; http://www.buglabs.net/
(39) comScore: Number accessing health information via phones doubles; http://mobihealthnews.com/15905/ ... h-info-via-mobiles/
(40) LUCAS: ‘Killer App’ that could help save lives; http://edition.cnn.com/2011/TECH ... .malaria/index.html
(41) A Touchscreen as a Biomolecule Detection Platform; http://onlinelibrary.wiley.com/d ... .201105986/abstract
(42) Samsung; http://www.samsung.com
(43) Fatal Distraction: Deaths of headphone-wearing pedestrians on the rise; http://usnews.msnbc.msn.com/_new ... strians-on-the-rise





