SKINPUT TECHNOLOGY SEMINAR REPORT


ABSTRACT

Microsoft Research has developed Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface.



Joints also play an important role in making tapped locations acoustically distinct. Bones are held together by ligaments, and joints often include additional biological structures such as fluid cavities. This makes joints behave as acoustic filters. In some cases, these may simply dampen acoustics; in other cases, they will selectively attenuate specific frequencies, creating location-specific acoustic signatures.

SENSING

To capture the rich variety of acoustic information, we evaluated many sensing technologies, including bone conduction microphones, conventional microphones coupled with stethoscopes, piezo contact microphones, and accelerometers.

However, these transducers were engineered for applications very different from measuring acoustics transmitted through the human body. Most mechanical sensors are engineered to provide relatively flat response curves over the range of frequencies relevant to our signal.

This is a desirable property for most applications, which call for a faithful representation of the input signal, uncolored by the properties of the transducer.

However, because only a specific set of frequencies is conducted through the arm in response to tap input, a flat response curve leads to the capture of irrelevant frequencies and thus to a low signal-to-noise ratio. While bone conduction microphones might seem a suitable choice for Skinput, these devices are typically engineered for capturing human voice and filter out energy below the range of human speech, whose lowest frequency is around 85 Hz.

Thus, most sensors in this category were not especially sensitive to the lower-frequency signals that matter most for this application. To overcome these challenges, we moved away from a single sensing element with a flat response curve to an array of highly tuned vibration sensors. Specifically, we employ small, cantilevered piezo films. By adding small weights to the end of the cantilever, we are able to alter the resonant frequency, allowing each sensing element to be responsive to a unique, narrow, low-frequency band of the acoustic spectrum.

Adding more mass lowers the range of excitation to which a sensor responds. Additionally, the cantilevered sensors were naturally insensitive to forces parallel to the skin, so the skin stretch induced by many routine movements does not register strongly. However, the sensors are highly responsive to motion perpendicular to the skin plane, which is perfect for capturing transverse surface waves (Figure 2) and longitudinal waves emanating from interior structures (Figure 3).
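
To make the effect of the added tip weights concrete, the short sketch below applies the ideal mass-spring resonance relation f = (1/(2π))·√(k/m) to a cantilever. The stiffness and mass values are arbitrary placeholders chosen for illustration, not measurements of the actual piezo elements.

```java
/**
 * Rough illustration of how adding tip mass lowers a cantilever sensor's
 * resonant frequency, using the ideal mass-spring relation
 * f = (1 / (2 * pi)) * sqrt(k / m). The stiffness and mass values below are
 * arbitrary placeholders, not measurements of the actual piezo elements.
 */
public class CantileverResonanceSketch {

    static double resonantFrequencyHz(double stiffnessNewtonsPerMetre, double massKg) {
        return Math.sqrt(stiffnessNewtonsPerMetre / massKg) / (2.0 * Math.PI);
    }

    public static void main(String[] args) {
        double stiffness = 40.0;     // N/m, placeholder
        double baseMass = 0.5e-3;    // 0.5 g effective moving mass, placeholder
        for (int i = 0; i <= 4; i++) {
            double addedMass = i * 0.5e-3;   // add weight in 0.5 g steps
            System.out.printf("added mass %.1f g -> resonance %.0f Hz%n",
                    addedMass * 1e3, resonantFrequencyHz(stiffness, baseMass + addedMass));
        }
    }
}
```

Because the resonant frequency scales with 1/√m, progressively heavier tip weights shift each sensing element toward a lower, narrower band, which is exactly the tuning behavior described above.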

Finally, our sensor design is relatively inexpensive and can be manufactured in a very small form factor, rendering it suitable for inclusion in future mobile devices, for example an audio player strapped to the arm.

The decision to have two sensor packages was motivated by our focus on the arm for input.


When placed on the upper arm (above the elbow), we hoped to collect acoustic information from the fleshy biceps area in addition to the firmer area on the underside of the arm, which has better acoustic coupling to the humerus, the main bone that runs from shoulder to elbow. When the sensor was placed below the elbow, on the forearm, one package was located near the radius, the bone that runs from the lateral side of the elbow to the thumb side of the wrist, and the other near the ulna, which runs parallel to this on the medial side of the arm, closest to the body.

Each location thus provided slightly different acoustic coverage and information helpful in disambiguating input location (Figure 5). We tuned the upper sensor package to be more sensitive to lower-frequency signals, as these were more prevalent in fleshier areas. Conversely, we tuned the lower sensor array to be sensitive to higher frequencies, in order to better capture signals transmitted through denser bones.

Each channel was sampled at 5.5 kHz. This reduced sample rate (and consequently low processing bandwidth) makes our technique readily portable to embedded processors. Data was then sent from our thin client over a local socket to our primary application, written in Java.
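
As a rough illustration of this architecture, the sketch below shows a receiving end that accepts a local socket connection and reads frames of ten sensor samples. The port number, frame layout, and 32-bit float encoding are assumptions made for this example, not the original wire format.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

/**
 * Minimal sketch of the receiving side of a thin-client link: the analysis
 * application accepts a local socket connection and reads frames of ten
 * sensor samples. Port, frame layout, and float encoding are assumptions.
 */
public class SensorSocketReaderSketch {

    private static final int CHANNELS = 10;

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000);           // local port, assumed
             Socket client = server.accept();
             DataInputStream in = new DataInputStream(client.getInputStream())) {

            double[] frame = new double[CHANNELS];
            int sampleIndex = 0;
            while (true) {
                for (int c = 0; c < CHANNELS; c++) {
                    frame[c] = in.readFloat();                       // one sample per channel
                }
                sampleIndex++;
                // Hand the frame to segmentation, e.g. the segmenter sketch further below:
                // segmenter.onSample(frame, sampleIndex);
            }
        }
    }
}
```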

This program performed three key functions. First, it provided a live visualization of the data from our ten sensors, which was useful in identifying acoustic features (Figure 6). Second, it segmented inputs from the data stream into independent instances. Third, it classified these input instances. The audio stream was segmented into individual taps using an absolute exponential average of all ten channels (Figure 6, red waveform). When an intensity threshold was exceeded (Figure 6, upper blue line), the program recorded the timestamp as a potential start of a tap.

If the intensity did not fall below a second, independent closing threshold (Figure 6, lower purple line) within a bounded window after the onset crossing (a duration range we found to be typical of finger impacts), the event was discarded.

If start and end crossings were detected that satisfied these criteria, the acoustic data in that period, plus a 60 ms buffer on either end, was considered an input event (Figure 6, vertical green regions).
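
This segmentation heuristic can be summarized in a few lines of code. The sketch below maintains an absolute exponential average over all channels and applies an onset threshold, a lower closing threshold, a duration check, and padding buffers; every numeric constant is an illustrative placeholder rather than a value from the original system.

```java
/**
 * Minimal sketch of the tap segmentation heuristic: an absolute exponential
 * average over all sensor channels is compared against an onset threshold
 * and a lower closing threshold, with a duration check and a small buffer
 * around the detected tap. All numeric constants are illustrative placeholders.
 */
public class TapSegmenterSketch {

    private static final double ALPHA = 0.05;            // smoothing factor (assumed)
    private static final double ONSET_THRESHOLD = 0.20;  // "upper" onset threshold (assumed)
    private static final double CLOSE_THRESHOLD = 0.05;  // "lower" closing threshold (assumed)
    private static final int BUFFER_SAMPLES = 330;       // ~60 ms of padding at 5.5 kHz
    private static final int MIN_TAP_SAMPLES = 550;      // minimum plausible tap length (assumed)
    private static final int MAX_TAP_SAMPLES = 3850;     // maximum plausible tap length (assumed)

    private double envelope = 0.0;
    private int tapStart = -1;

    /** Feed one multi-channel frame; returns [start, end] sample indices of a tap, or null. */
    public int[] onSample(double[] channels, int sampleIndex) {
        double sum = 0.0;
        for (double c : channels) {
            sum += Math.abs(c);                           // rectify each channel
        }
        // Absolute exponential average across all channels (the "red waveform").
        envelope = ALPHA * (sum / channels.length) + (1.0 - ALPHA) * envelope;

        if (tapStart < 0) {
            if (envelope > ONSET_THRESHOLD) {
                tapStart = sampleIndex;                   // potential start of a tap
            }
        } else if (envelope < CLOSE_THRESHOLD) {
            int start = tapStart;
            int duration = sampleIndex - start;
            tapStart = -1;
            if (duration >= MIN_TAP_SAMPLES && duration <= MAX_TAP_SAMPLES) {
                // Keep the tap plus a small buffer on either end.
                return new int[] {Math.max(0, start - BUFFER_SAMPLES), sampleIndex + BUFFER_SAMPLES};
            }
            // Otherwise the event is discarded as noise.
        }
        return null;
    }
}
```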

Although simple, this heuristic proved to be highly robust, mainly due to the extreme noise suppression provided by our sensing approach. After an input has been segmented, the waveforms are analyzed.


Taps are highly discrete events: the signals simply diminished in intensity over time. Thus, features are computed over the entire input window and do not capture any temporal dynamics. We employ a brute-force machine learning approach, computing a large number of features in total, many of which are derived combinatorially. Our software uses the SVM implementation provided in the Weka machine learning toolkit; other, more sophisticated classification techniques and features could be employed.
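
The sketch below illustrates the idea of window-level features computed over an entire segmented input: per-channel amplitude and energy statistics plus combinatorially derived channel ratios. It is only a small, representative subset chosen for illustration; the actual feature set is not reproduced here.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of window-level feature extraction for one segmented tap.
 * window[channel][sample] holds the samples of every sensor channel over the
 * segmented event. The features here (mean absolute amplitude, energy, and
 * pairwise channel-amplitude ratios) are a small illustrative subset only.
 */
public class TapFeatureExtractorSketch {

    public static double[] extract(double[][] window) {
        int channels = window.length;
        double[] meanAbs = new double[channels];
        double[] energy = new double[channels];

        for (int c = 0; c < channels; c++) {
            for (double s : window[c]) {
                meanAbs[c] += Math.abs(s);
                energy[c] += s * s;
            }
            meanAbs[c] /= window[c].length;
        }

        List<Double> features = new ArrayList<>();
        for (int c = 0; c < channels; c++) {
            features.add(meanAbs[c]);    // average amplitude per channel
            features.add(energy[c]);     // total energy per channel
        }
        // Combinatorially derived features: ratios between channel amplitudes.
        for (int i = 0; i < channels; i++) {
            for (int j = i + 1; j < channels; j++) {
                features.add(meanAbs[i] / (meanAbs[j] + 1e-9));
            }
        }

        double[] out = new double[features.size()];
        for (int i = 0; i < out.length; i++) {
            out[i] = features.get(i);
        }
        return out;
    }
}
```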

Before the SVM can classify input instances, it must first be trained to the user and the sensor position. This stage requires the collection of several examples for each input location of interest. When using Skinput to recognize live input, the same acoustic features are computed on the fly for each segmented input and fed into the trained SVM for classification. We use an event model in our software: once an input is classified, an event associated with that location is instantiated, and any interactive features bound to that event are fired. We readily achieve interactive speeds.
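
A minimal sketch of this train-then-classify flow is shown below, using Weka's SMO classifier (an SVM implementation) together with a simple listener-style event model in which a handler bound to a location is fired when a tap is classified as that location. The class names, attribute layout, and handler interface are illustrative assumptions, not the original implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import weka.classifiers.functions.SMO;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;

/**
 * Minimal sketch of per-user training and live classification with Weka's
 * SMO (SVM) classifier, plus a simple event model: once a tap is classified,
 * the handler bound to that location is fired. Names are illustrative.
 */
public class SkinputClassifierSketch {

    public interface TapHandler { void onTap(String location); }

    private final Instances header;                      // attribute schema + class labels
    private final SMO svm = new SMO();
    private final Map<String, TapHandler> handlers = new HashMap<>();

    public SkinputClassifierSketch(int numFeatures, List<String> locations) {
        ArrayList<Attribute> attrs = new ArrayList<>();
        for (int i = 0; i < numFeatures; i++) {
            attrs.add(new Attribute("f" + i));                 // numeric acoustic feature
        }
        attrs.add(new Attribute("location", locations));       // nominal class: tap location
        header = new Instances("skinput", attrs, 0);
        header.setClassIndex(header.numAttributes() - 1);
    }

    /** Train from the examples collected during the per-user training rounds. */
    public void train(List<double[]> examples, List<String> labels) throws Exception {
        Instances data = new Instances(header, examples.size());
        for (int i = 0; i < examples.size(); i++) {
            data.add(toInstance(examples.get(i), labels.get(i)));
        }
        svm.buildClassifier(data);
    }

    public void bind(String location, TapHandler handler) {
        handlers.put(location, handler);
    }

    /** Classify one segmented tap and fire the handler bound to that location. */
    public String classifyAndDispatch(double[] features) throws Exception {
        Instance inst = toInstance(features, null);
        String location = header.classAttribute().value((int) svm.classifyInstance(inst));
        TapHandler handler = handlers.get(location);
        if (handler != null) {
            handler.onTap(location);
        }
        return location;
    }

    private Instance toInstance(double[] features, String label) {
        double[] values = new double[header.numAttributes()];
        System.arraycopy(features, 0, values, 0, features.length);
        Instance inst = new DenseInstance(1.0, values);
        inst.setDataset(header);
        if (label != null) {
            inst.setClassValue(label);
        }
        return inst;
    }
}
```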

The participants recruited for the evaluation represented a diverse cross-section of potential ages and body types, with ages ranging from 20 to 56. For the study, input locations were organized into three groupings: the fingers, the whole arm, and the forearm. These groupings, illustrated in Figure 7, are of particular interest with respect to interface design and, at the same time, push the limits of our sensing capability.

From these three groupings, they derived five different experimental conditions, described below.

The first grouping was the five fingers. Foremost, fingers provide clearly discrete interaction points, which even have familiar names. In addition to the five fingertips, there are 14 knuckles (five major, nine minor), which, taken together, could offer 19 readily identifiable input locations on the fingers alone.

Second, we have exceptional finger-to-finger dexterity, as demonstrated when we count by tapping on our fingers. Finally, the fingers are linearly ordered, which is potentially useful for interfaces like number entry, magnitude control, and menu selection. At the same time, fingers are among the most uniform appendages on the body, with all but the thumb sharing a similar skeletal and muscular structure.

This drastically reduces acoustic variation and makes differentiating among them difficult. Additionally, acoustic information must cross as many as five finger and wrist joints to reach the forearm, which further dampens signals. For this experimental condition, they thus decided to place the sensor arrays on the forearm, just below the elbow.

Despite these difficulties, there may be measurable acoustic differences among fingers, primarily related to finger length and thickness, interactions with the complex structure of the wrist bones, and variations in the acoustic transmission properties of the muscles extending from the fingers to the forearm.

The second grouping comprised five points distributed across the whole arm and hand. They selected these locations for two important reasons. First, they are distinct and named parts of the body (for example, the wrist and the palm); this allowed participants to accurately tap these locations without training or markings.


Additionally, these locations proved to be acoustically distinct during piloting, with the large spatial spread of input points offering further variation. The team used these locations in three different conditions.

One condition placed the sensor above the elbow, while another placed it below. This was incorporated into the experiment to measure the accuracy loss across this significant articulation point (the elbow).

Additionally, participants repeated the lower placement condition in an eyes-free context: participants were told to close their eyes and face forward, both for training and testing.

This condition was included to gauge how well users could target on-body input locations in an eyes-free context.

The third grouping consisted of ten locations on the forearm. Not only was this a very high density of input locations (unlike the whole-arm condition), but it also relied on an input surface (the forearm) with a high degree of physical uniformity. The forearm was compelling due to its large and flat surface area, as well as its immediate accessibility, both visually and for finger input.

Simultaneously, this makes for an ideal projection surface for dynamic interfaces. To maximize the surface area for input, they placed the sensor above the elbow, leaving the entire forearm free. Rather than naming the input locations, as was done in the previously described conditions, they employed small, coloured stickers to mark input targets.


This was both to reduce confusion (since locations on the forearm do not have common names) and to increase input consistency. They considered the forearm ideal for projected interface elements; the stickers served as low-tech placeholders for projected buttons.

DESIGN AND SETUP

They employed a within-subjects design, with each participant performing tasks in each of the five conditions in randomized order: five fingers with the sensors below the elbow; five points on the whole arm with the sensors above the elbow; the same five points with the sensors below the elbow, both sighted and eyes-free; and ten marked points on the forearm with the sensors above the elbow.


Participants were seated in a conventional office chair, in front of a desktop computer that presented stimuli. For conditions with sensors below the elbow, they placed the armband 3cm away from the elbow, with one sensor package near the radius and the other near the ulna. For conditions with the sensors above the elbow, they placed the armband 7cm above the elbow, such that one sensor package rested on the biceps. Right-handed participants had the armband placed on the left arm, which allowed them to use their dominant hand for finger input.

For the one left-handed participant, we flipped the setup, which had no apparent effect on the operation of the system. Tightness of the armband was adjusted to be firm, but comfortable. While performing tasks, participants could place their elbow on the desk, tucked against their body, or on the chair's adjustable armrest; most chose the latter.

Participants practiced duplicating the demonstrated tapping motions for approximately one minute with each gesture set. This allowed participants to familiarize themselves with our naming conventions and to practice tapping their arm and hands with a finger on the opposite hand. It also allowed us to convey the appropriate tap force to participants, who often initially tapped unnecessarily hard.

To train the system, participants were instructed to comfortably tap each location ten times, with a finger of their choosing. This constituted one training round. In total, three rounds of training data were collected per input location set. An exception to this procedure was the case of the ten forearm locations, where only two rounds were collected to save time (20 examples per location, 200 data points in total). Total training time for each experimental condition was approximately three minutes.
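
Tying the earlier sketches together, the outline below mirrors this training procedure: a configurable number of rounds, ten taps per location per round, feature extraction for each segmented tap, and a final call to train the classifier. The prompt and tap-capture methods are placeholders standing in for the real stimulus presentation and live segmentation, and the class reuses the illustrative SkinputClassifierSketch and TapFeatureExtractorSketch shown earlier.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of the training-data collection loop: several rounds, a
 * fixed number of taps per location per round, feature extraction for each
 * segmented tap, then one training call. Prompting and tap capture are stubs.
 */
public class TrainingProtocolSketch {

    public static void runTraining(SkinputClassifierSketch classifier,
                                   List<String> locations,
                                   int rounds, int tapsPerRound) throws Exception {
        List<double[]> examples = new ArrayList<>();
        List<String> labels = new ArrayList<>();

        for (int round = 0; round < rounds; round++) {
            for (String location : locations) {
                promptParticipant("Please tap your " + location + " " + tapsPerRound + " times");
                for (int tap = 0; tap < tapsPerRound; tap++) {
                    double[][] window = captureNextSegmentedTap();   // placeholder
                    examples.add(TapFeatureExtractorSketch.extract(window));
                    labels.add(location);
                }
            }
        }
        classifier.train(examples, labels);
    }

    // Placeholders standing in for the experiment UI and the live segmenter.
    private static void promptParticipant(String message) { System.out.println(message); }
    private static double[][] captureNextSegmentedTap() { return new double[10][512]; }
}
```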

They used the training data to build an SVM classifier. During the subsequent testing phase, participants were presented with simple text stimuli indicating which location to tap. The order of stimuli was randomized, with each location appearing ten times in total.

The system performed real-time segmentation and classification, and provided immediate feedback to the participant on the classified location. Feedback was provided so that participants could see where the system was making errors, as they would if using a real application.

If an input was not segmented (i.e., the tap was not detected), participants could simply tap again. Overall, segmentation error rates were negligible in all conditions and are not included in further analysis.

In the first additional experiment, which tested the performance of the system while users walked and jogged, they recruited one male (age 23) and one female (age 26) for a single-purpose experiment.

For the rest of the additional experiments, they recruited seven new participants (three female). In all cases, the sensor armband was placed just below the elbow. As in the main experiment, each additional experiment consisted of a training phase, where participants provided between 10 and 20 examples for each input type, and a testing phase, in which participants were prompted to provide a particular input ten times per input type. As before, input order was randomized; segmentation and classification were performed in real time.

In regard to bio-acoustic sensing, with sensors coupled to the body, noise created during other motions is particularly troublesome, and walking and jogging represent perhaps the most common types of whole-body motion.


This experiment explored the accuracy of our system in these scenarios. Each participant trained and tested the system while walking and jogging on a treadmill.

Three input locations were used to evaluate accuracy: arm, wrist, and palm. Additionally, the rate of false positives (i.e., inputs detected when none was made) was recorded. The testing phase took roughly three minutes to complete (four trials in total: two participants, two conditions). In both walking trials, the system never produced a false-positive input, and classification accuracy for the intended inputs remained high. In the jogging trials, the system had four false-positive input events (two per participant) over six minutes of continuous jogging.

Considering that jogging is perhaps the hardest input filtering and segmentation test, we view this result as extremely positive. Classification accuracy, however, decreased while jogging. Although the noise generated by jogging almost certainly degraded the signal (and in turn lowered classification accuracy), the chief cause of this decrease was the quality of the training data.

Participants provided only ten examples for each of the three tested input locations, and the training examples were collected while participants were jogging. Thus, the resulting training data was not only highly variable but also sparse, neither of which is conducive to accurate machine learning classification.

More rigorous collection of training data could yield even stronger results.

Simply put, Skinput is a technology for providing input using the human body itself: combined with a small projector, interface elements can be displayed on the skin, and being able to provide input directly on this projection is a major benefit. The term Skinput is a combination of the words skin and input, reflecting the functionality of the technology: input through the skin.

How Does Skinput Technology Work?

The system is worn as an armband containing acoustic detectors, which determine which part of the body (and hence which part of any projected display) has been touched. Changes in bone density, mass, and size, along with the filtering effects of joints and soft tissues, mean that different skin locations are acoustically distinct. The software matches the sensed sound frequencies to particular skin locations, allowing the system to determine which skin button the user pressed. The prototype system then uses a wireless technology such as Bluetooth to send commands to the device being controlled. Skinput thus functions by listening to the vibrations of the human body.
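
To make the end-to-end control flow concrete, the snippet below reuses the illustrative classifier sketch from earlier and binds a few hypothetical tap locations to media-player commands; the command transport (for example Bluetooth) is reduced to a stub, since it depends entirely on the device being controlled.

```java
import java.util.List;

/**
 * Illustrative wiring of classified tap locations to media-player commands.
 * The location names, commands, and the sendCommand stub (standing in for,
 * e.g., a Bluetooth link to the controlled device) are hypothetical examples.
 */
public class AudioPlayerBindingSketch {

    // Placeholder for whatever transport relays the command to the device.
    static void sendCommand(String command) {
        System.out.println("sending command: " + command);
    }

    public static void main(String[] args) {
        int numFeatures = 64;   // placeholder length of the feature vector
        List<String> locations = List.of("thumb", "index", "middle", "ring", "pinky");
        SkinputClassifierSketch skinput = new SkinputClassifierSketch(numFeatures, locations);

        // Hypothetical bindings for an arm-worn audio player.
        skinput.bind("index",  loc -> sendCommand("PLAY_PAUSE"));
        skinput.bind("middle", loc -> sendCommand("NEXT_TRACK"));
        skinput.bind("ring",   loc -> sendCommand("PREVIOUS_TRACK"));

        // In a running system: train(...) with per-user examples, then call
        // classifyAndDispatch(features) for every segmented tap.
    }
}
```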

This is indeed a groundbreaking concept in itself. Potential applications include an audio player strapped to the arm that can be controlled with the fingertips, controlling computing devices without the need for external input hardware, and portable computing devices paired with a pico projector.