
Wednesday, 29 August 2018

Cane roller for visually impaired is designed for exploring virtual worlds-Navigation Abilities

Oh the wonder of it all. Virtual reality is opening us up to experiences that set our imaginations and curiosity on fire to explore the unknown, the untried, in full motion. Beyond headsets and controllers and screens, the fundamental enabler is our eyes, as we step down, leap up, walk through new worlds. But wait. What if you are visually impaired?
The easy assumption: virtual reality is unexplorable for them, so forget about it. Unless, that is, VR can be experienced by vision-impaired people.
Microsoft Research looked for answers, working on a system whereby exploring and understanding unfamiliar virtual spaces could be made possible for the visually impaired.
"Working with interns Yuhang Zhao from Cornell University and Cindy Bennett from the University of Washington, said the Microsoft Research blog, "Microsoft Research developed the Canetroller prototype to enable people who are skilled white cane users in the real world to transfer their navigation abilities into virtual settings."
Their haptic controller simulates the interaction of a white cane as the blind person attempts to navigate a virtual space using their already existing orientation and mobility skills.
The team's approach actually involves both a haptic and auditory cane simulation.
When the virtual cane hits a virtual object, the brake stops the controller from moving. The voice coil then kicks in, generating a vibration that simulates the high-frequency vibration of a cane striking a real object. A 3-D spatial sound is also provided. The controller is paired with an HTC Vive headset for tracking head position and delivering 3-D spatial audio through headphones.
The voice coil can also simulate ground texture when the cane is sweeping the ground.
All in all, there are five parts to the system: 1. a braking mechanism anchored at the waist; 2. the hand-held cane controller; 3. a slider connecting the brake and the controller; 4. a voice coil mounted on the tip of the cane controller to generate vibrotactile sensations; and 5. an HTC Vive tracker on the controller to track its movement.
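To make the interplay of brake, voice coil, and spatial audio concrete, here is a minimal sketch of the control loop such a system implies. It is inferred only from the description above; every type and function name is hypothetical, not Microsoft's actual code.

```typescript
// Hypothetical sketch of a Canetroller-style haptic loop, based only on
// the description above. All names and values are illustrative.

interface Vec3 { x: number; y: number; z: number; }

interface VirtualScene {
  // Returns the object hit by the cane tip, or null if the sweep is clear.
  intersect(caneTip: Vec3): { surface: string; position: Vec3 } | null;
}

interface Hardware {
  trackerPosition(): Vec3;            // HTC Vive tracker on the controller
  engageBrake(): void;                // physically stops the slider
  releaseBrake(): void;
  pulseVoiceCoil(frequencyHz: number, amplitude: number): void;
  play3dSound(clip: string, at: Vec3): void; // spatial audio via headphones
}

function hapticStep(scene: VirtualScene, hw: Hardware): void {
  const tip = hw.trackerPosition();
  const hit = scene.intersect(tip);

  if (hit !== null) {
    hw.engageBrake();                 // cane "stops" at the virtual object
    hw.pulseVoiceCoil(250, 1.0);      // high-frequency impact vibration (values invented)
    hw.play3dSound(`impact-${hit.surface}`, hit.position);
  } else {
    hw.releaseBrake();
    // While sweeping, render ground texture as a low-amplitude vibration.
    hw.pulseVoiceCoil(40, 0.2);
  }
}
```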
How well does their system work? The team conducted a usability study.
Participants were asked to experience a room with four walls, carpet, door, table, and trashcan. Eight out of nine participants could understand the layout and could locate the position of all virtual objects by using the cane controller—after a few minutes of practice.
The outdoor test involved a sidewalk, curb with tactile domes, traffic light and street with cars passing. Participants could identify the objects, understand the flow of traffic, and were able to cross the street based on an audio signal from the traffic light.
The researchers mention a practical application of benefit, supporting orientation and mobility training. "The Canetroller enables novel scenarios such as new types of Orientation and Mobility training in which people can practice white cane navigation skills virtually in specific settings before travelling to a real-world location," said the Microsoft Research blog.
From a general technology perspective, the standout characteristic about their work lies in improved haptics. The dream is always having users experience the virtual world more naturally. That includes enabling users' finger and hands to have dynamic haptic feedback.
"The Microsoft Research team – Mike Sinclair, Christian Holz, Eyal Ofek, Hrvoje Benko, Ed Cutrell, and Meredith Ringel Morris – have been exploring ways existing technology can generate a wide range of haptic sensations that can fit within hand-held VR controllers, similar in look and feel to those currently used by consumers."
Christian Holz said, "What you really want is the impression of virtual shapes when you interact with objects in VR, not just the binary feedback you get from current devices."


Friday, 17 August 2018

Apple Watch, Fitbit can diagnose hypertension and sleep apnea: study-Cardiogram

A new study from the University of California, San Francisco and a health startup suggests that Apple Watch and Fitbit can accurately diagnose common health issues such as hypertension and sleep apnea.
The study published by the startup, Cardiogram, and UCSF Health Lab said hypertension and sleep apnea were diagnosed on wearables with 82 percent and 90 percent accuracy, respectively. Those rates are slightly lower than the rate for abnormal heart rhythm, which Cardiogram and UCSF diagnosed with 97 percent accuracy in a previous study from May.
Cardiogram - which is not affiliated with Apple or Fitbit - and UCSF determined accuracy levels by using artificial intelligence to pick up abnormal patterns in heart rate.
The study was conducted with more than 6,000 subjects, 37 percent of whom had hypertension and 17 percent of whom had sleep apnea. The findings will be subjected to months of peer-reviewed clinical research for validation. Cardiogram says it plans to expand its studies into diagnosing pre-diabetes and diabetes.
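The underlying technique, flagging conditions from patterns in a heart-rate time series, can be sketched in miniature. Cardiogram's actual system is a trained deep neural network; the hand-picked features, weights, and threshold below are invented purely to show the shape of such a screening pipeline.

```typescript
// Toy sketch of screening from wearable heart-rate data. The real system
// uses a trained deep neural network; the features and weights here are
// made up to illustrate the general approach, not Cardiogram's model.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stddev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

// Heart-rate samples (beats per minute) recorded overnight by a wearable.
function sleepApneaRiskScore(bpmOvernight: number[]): number {
  const variability = stddev(bpmOvernight); // apnea events disturb heart rate
  const restingRate = mean(bpmOvernight);
  // Logistic score with invented weights standing in for a trained model.
  const z = 0.15 * variability + 0.02 * restingRate - 3.0;
  return 1 / (1 + Math.exp(-z));            // probability-like score in (0, 1)
}

const score = sleepApneaRiskScore([58, 62, 71, 55, 90, 60, 84, 57]);
console.log(score > 0.5 ? "flag for follow-up" : "no flag");
```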
"What if we could transform wearables people already own - Apple Watches, Android Wears, Garmins, and Fitbits - into inexpensive, everyday screening tools using artificial intelligence?" wrote Cardiogram co-founder Brandon Ballinger in a Medium post.
Hypertension, or high blood pressure, and sleep apnea, in which breathing repeatedly stops and starts during sleep, affect millions of Americans - most of whom do not know they have either disorder. More than 80 percent of Americans with sleep apnea are undiagnosed, according to the American Sleep Apnea Association.
More than 18 million Americans are estimated to have sleep apnea, but hypertension is far more prevalent. More than 75 million Americans - or 29 percent - have hypertension, according to the Centers for Disease Control and Prevention.
Hypertension and sleep apnea cost the United States $46 billion and $150 billion, respectively, in direct medical spending, lost productivity, and accidents, according to two separate studies by the CDC and American Academy of Sleep Medicine.
Apple and Fitbit have been actively looking into expanding their medical research into abnormal heart rhythm, hypertension and sleep apnea. Apple partnered with Stanford School of Medicine to study how Apple Watch can detect abnormal heart rhythm in its proprietary Health apps.
"One of the things that we've learned that we've been really surprised and delighted about is this device ... has essentially alerted people through the collection of the data that they have a problem," said AppleCEO Tim Cook in an interview with Fortune in August. "And that spurred them to go to the doctor and say, 'Look at my heart rate data. Is something wrong?' And a not-insignificant number have found out if they hadn't come into the doctor they would have died."
Fitbit for months has said it is focusing on sleep apnea. The company's new smartwatch, Ionic, has a new optical sensor to better collect data to diagnose sleep apnea.
In an interview with The Verge on Fitbit's sleep apnea efforts in August, Fitbit CEO James Park said the company will need to run many clinical trials to get its technology approved for future diagnoses.
"Diagnostics is a tricky term," said Park. "But definitely over time we hope to progress from screening in conjunction with a medical professional, to more diagnostics or treatment."
In September, both Apple and Fitbit were selected by the Food and Drug Administration to participate in a trial program allowing the companies to skip certain regulations to expedite innovation.


Monday, 6 August 2018

JavaScript for beginners: Grasshopper can teach coding-Puzzles and Quizzes


If you are new to coding and don't want to rearrange your life as a result (changing work hours, spending wads on formal courses), you may want to know about Grasshopper, a new way to learn to write code on your phone.
The deal is this: a few taps on your smartphone and you are on your way to JavaScript. To get started, head over to the Google Play Store or the iTunes App Store.
Grasshopper gets you on your coding way through puzzles and quizzes. This teacher app was launched through the Google incubator, Area 120, which is described as a workshop for experimental projects.
The coding app is for beginners and it is available for free on Android and iOS. Grasshopper's structure is such that it provides progressively challenging levels. In 9to5Google, Justin Duino said it was similar to "how apps like Duolingo teach you how to learn a foreign language."
He described what it is like after signing in. You are walked through the basics of programming and given several quizzes. Then comes more subject matter and exercises.
The App Store Preview remarks:
"The problem is that today's university-first approach is a bit old school, and frankly, out of touch. That's why Grasshopper offers a new kind of curriculum for the everyday coder."
If your learning tool can fit in your pocket, that implies it can fit in your lifestyle (do it on a work break or your train and bus commutes).
Grasshopper is an easy-to-remember, friendly name, but the team chose it for a different reason: the app's grasshopper is named Grace, in honor of Grace Hopper, an early pioneer of computer programming.
The team, for its part, says Grasshopper was built by people passionate about removing barriers to coding education.
Why JavaScript? It is an enormously popular programming language. "Grasshopper currently teaches using the popular programming language JavaScript, used by more than 70% of professional developers," said the Grasshopper team.
"When it comes to web development, JavaScript is always in the list of required skills, as it is one of the basic technologies for web development, just like HTML and CSS. Thus, JavaScript is eating the web development world," said Anastasia Stefanuk in Simple Programmer.
In further detail, according to descriptions, each course covers how code works, and it goes over animations, drawing shapes and creating more complex functions. One develops confidence to play around to build interactive animations.
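For a sense of what "drawing shapes and creating more complex functions" might look like at this level, here is a guess at the flavor of such an exercise. It is not actual Grasshopper content; the drawBox helper and its arguments are made up for illustration.

```typescript
// Illustrative beginner exercise in the spirit described above; the
// drawBox helper and its arguments are hypothetical, not Grasshopper API.
function drawBox(color: string, size: number): void {
  console.log(`drawing a ${size}px ${color} box`);
}

// Early lessons tend to start with single calls...
drawBox("red", 50);

// ...and build toward functions that compose them.
function drawRow(colors: string[]): void {
  for (const color of colors) {
    drawBox(color, 50);
  }
}

drawRow(["red", "green", "blue"]);
```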
Dani Deahl in The Verge called attention to another valuable aspect of Grasshopper—drawing on reward structures that many mobile games rely on. Deahl wrote, "there's also an achievements section within Grasshopper. Here, you can see how many concepts you've unlocked, the number of JavaScript keys you've used, and how many days long your current coding streak is."


Sunday, 29 July 2018

Offshore wind farm: First of 11 turbines goes up in Scotland initiative- Virtually Noiseless

Scotland is making wind energy news with an offshore wind project carrying high ambitions. The spotlight is on the European Offshore Wind Deployment Centre (EOWDC), an offshore wind test and demo facility. It is Scotland's largest, and it is being developed by the Vattenfall-owned Aberdeen Offshore Wind Farm Limited.
The project centres on 11 turbines in Aberdeen Bay. Once operational, it will be a boost to Aberdeen's global standing in energy innovation, supporters said.
The news is that one of these 11 turbines has already been erected. A video carried the announcement. It was a momentous day for the renewable energy industry in Scotland, said Adam Ezzamel, project director, for the EOWDC wind farm in Aberdeen Bay.
Ezzamel said that one rotation of this enormous structure was sufficient to power the average UK home for the entire day.
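That claim is easy to sanity-check. The 8.8MW rating comes from the article; the rotation period of roughly five seconds and the household figure of 8-10kWh per day are outside assumptions for this back-of-envelope calculation.

```typescript
// Back-of-envelope check of the one-rotation claim. The 8.8 MW rating is
// from the article; the ~5 s rotation period and the household figure are
// assumptions for illustration only.
const powerWatts = 8.8e6;        // turbine output near rated power
const rotationSeconds = 5;       // assumed period for a large offshore rotor
const joulesPerRotation = powerWatts * rotationSeconds;  // 4.4e7 J
const kwhPerRotation = joulesPerRotation / 3.6e6;        // ≈ 12.2 kWh
console.log(kwhPerRotation.toFixed(1), "kWh per rotation");
// A UK home uses roughly 8-10 kWh of electricity a day, so one rotation
// is indeed on the order of a day's consumption.
```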
The other bit of news, as reported by David McPhee in Energy Voice: Vattenfall is also claiming an industry breakthrough with the upgrade of two of its turbines from 8.4MW to 8.8MW, which the company says is the "first time" such a model has been "deployed commercially in the wind industry."
That upgrade raises the total output of the completed wind farm to 93.2MW (nine turbines at 8.4MW plus two at 8.8MW: 9 × 8.4 + 2 × 8.8 = 93.2).
Chris Green, Scotland editor, i News, said according to developers, when the wind farm was fully operational, it would be able to provide the equivalent of over 70% of Aberdeen's domestic energy needs.
Vattenfall issued a press statement that the turbine was one of two turbines significantly enhanced with further power modes to generate more clean energy from the EOWDC. "The two turbines have each increased from 8.4MW to 8.8MW" and the installation "represents the first time an 8.8 MW model has been deployed commercially in the offshore wind industry."
McPhee, meanwhile, noted that the wind farm is using new suction bucket jackets embedded in the sand off Aberdeen, at commercial scale. Supporters say these can bring down the cost of offshore wind power. An article last year focused on how the suction buckets cut costs and underwater noise: "Instead of monopiles, these giant upside-down buckets paired with jacket substructures will anchor the wind turbines to the seabed."
The piling method used for offshore wind power foundations can cause a lot of noise and disturbance for sea mammals, fish and nearby coastal communities, said the article. Vattenfall instead adopted the suction-bucket technology, which is virtually noiseless.
"The suction bucket technology is well known in the oil and gas industry but this is the first time it will be used at a commercial scale in the offshore wind industry. Water is pumped out of the buckets, creating a pressure difference that forces the buckets into the seabed—when water is pumped out of the suction buckets, they sink in to the sea bed sediment. For decommissioning, water is pumped back in to retrieve the entire structure, said the article.


Saturday, 21 July 2018

Bento browser makes it easier to search on mobile devices-For iPhone

Searches involving multiple websites can quickly get confusing, particularly when performed on a mobile device with a small screen. A new web browser developed at Carnegie Mellon University now brings order to complex searches in a way not possible with conventional tabbed browsing.
The Bento browser, inspired by compartmentalized bento lunch boxes popular in Japan, stores each search session as a project workspace that keeps track of the most interesting or relevant parts of visited web pages. It's not necessary for a user to keep every site open to avoid losing information.
"With Bento, we're structuring the entire experience through these projects," said
Aniket Kittur, associate professor in the Human-Computer Interaction Institute (HCII). The projects are stored for later use, can be handed off to others, or can be moved to different devices. "This is a new way to browse that eliminates the tab overload that limits the usefulness of conventionalbrowsers."
Someone planning a trip to Alaska with a conventional browser, for instance, might create multiple tabs for each location or point of interest, as well as additional tabs for hotels, restaurants and activities. With Bento, users can identify pages they found useful, trash unhelpful pages and keep track of what they have read on each page. Bento also bundles the search result pages into task cards, such as accommodations, day trips, transportation, etc. The project could be shared with other people planning their own trips.
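The description suggests a simple data model: a project holds task cards, and each card holds pages annotated with read state and usefulness. Here is a minimal sketch of that structure; all names are hypothetical, inferred from the article rather than taken from the app itself.

```typescript
// Hypothetical data model for a Bento-style project workspace, inferred
// from the description above; none of these names come from the actual app.

type PageStatus = "unread" | "read" | "useful" | "trashed";

interface SavedPage {
  url: string;
  title: string;
  status: PageStatus;
  scrollPosition: number;   // lets a search resume where it left off
}

interface TaskCard {
  name: string;             // e.g. "accommodations", "day trips"
  pages: SavedPage[];
}

interface Project {
  title: string;            // e.g. "Trip to Alaska"
  cards: TaskCard[];
}

// A project can be serialized, synced to another device, or shared.
const trip: Project = {
  title: "Trip to Alaska",
  cards: [
    { name: "accommodations", pages: [] },
    { name: "day trips", pages: [] },
  ],
};
```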
Kittur's research team will present a report on their mobile web browser at CHI 2018, the Conference on Human Factors in Computing Systems, April 21-26 in Montreal, Canada. A research version of the Bento Browser for iPhones is available for download from the App Store.
Mobile devices now initiate more web searches than do desktop computers. Yet the limitations of conventional browsers become more acute on mobile devices. Not only is screen size limited, but mobile users are more often interrupted and distracted and have more difficulty saving and organizing information, said Nathan Hahn, a Ph.D. student in HCII.
In user studies that compared Bento with the Safari browser, users said they preferred Bento in cases where they wanted to continue a search later and wanted to pick up where they left off. They also said Bento kept their searches better organized. Though most participants found it easier to learn how to use Safari, they found Bento more useful for finding pages and believed that Bento made their mobile searches more effective.
One goal was to design Bento to work in a way that complements the way the mind works.
"If we get a lot of people using it, Bento could serve as a microscope to study how people make sense of information," Kittur said, noting people who use the research version are asked to consent to their searches becoming part of the research data. "This might lead to a new type of artificial intelligence," he added.
Bento Browser is now a search app for iPhones, but its capabilities for organizing searches and helping people resume searches also could benefit people using desktop computers. To accommodate those users, Kittur's team is now preparing a Bento plug-in for the Chrome browser.

Thursday, 3 May 2018

Using your arm as a smartwatch touchscreen- Self-Contained Projection Smartwatch

Smartwatches as devices for messaging and search are far eclipsed by the desktop, laptop, tablet and phone, for obvious reasons, namely their tiny touchscreens. In tech parlance, the smartwatch "input-output bottleneck" is lamentable, as it is a headache trying to work with such a small space. As Andrew Liszewski quipped in Gizmodo, "human fingers aren't getting any smaller, and interacting with a tiny touchscreen has proven a major disincentive for many would-be adopters of the technology."
But what if you can sport a smartwatch and just use your arm as a touchscreen? Researchers at Carnegie Mellon University have come up with a prototype that is a smartwatch with built-in projector to accomplish just that.
"Our custom smartwatch hardware consists of five primary components," the team said. The components are logic board, projector, depth-sensing array, metal enclosure and battery.
Should their prototype ever progress into a real product, its usefulness would be easy to appreciate, considering what a smartwatch is especially good for: quick glances and taps, not compiling a movie script.
Liszewski in Gizmodo remarked,
"Projectors have always been the most convenient way to create a temporary but large screen, which makes them the ideal way to improve the functionality of smartwatches where you only occasionally want a larger touchscreen. Most of the time I only want to see the time or who's texting me when I glance down at my Apple Watch."
The team is calling their prototype LumiWatch.
Wearables with "projected, on-skin touch interfaces have been a long-standing yet elusive goal, largely written off as science fiction," said Robert Xiao, one of the team. They have a prototype that is a functional and self-contained projection smartwatch. What do they mean by describing their watch as self-contained? It performs an independent operation—no tether to a smartphone or computer. The smartwatch offers roughly 40 square centimeters of interact area, around five times that of a typical smartwatch.
Wareable said the combined hardware and software could deliver the 1024 x 600-pixel resolution touchscreen display offering up an interactive surface on the arm. "From that touchscreen, you can tap and swipe to help replicate the kind of gesture support you'd get on a smartphone."
The display is bright enough to be seen outside as well as indoors. "You swipe left to unlock the watch, and apps are then displayed along your arm," said The Verge. Their watch logic board design involved a Qualcomm APQ8026 system-on-chip, which integrates a 1.2GHz quad-core CPU, 450MHz GPU, and Bluetooth 4.0 and WiFi controller. They added 768MB of RAM, 4GB flash memory, inertial measurement unit (IMU) and ambient light sensor. The smartwatch runs on Android 5.1.
The team's paper on their work is titled "LumiWatch: On-Arm Projected Graphics and Touch Input." The author affiliations are Carnegie Mellon Human-Computer Interaction Institute and the ASU Tech Co. in Beijing.
The prototype is powered from a 740mAh, 3.8V (2.8Wh) lithium-ion battery. Battery life depends on whether the use is intermittent or continuous. The authors reported on both situations. "Under average use conditions, we obtain over one hour of continuous projection (with CPU, GPU, WiFi, and Bluetooth all active). In more typical smartwatch usage, where the projection would only be active intermittently, we expect our battery to last roughly one day."
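The two figures are mutually consistent: a 2.8Wh battery emptied in about an hour implies an average draw near 2.8W with everything active, and a low projection duty cycle stretches that toward a day. A rough check, where the idle draw and duty cycle are assumptions for illustration:

```typescript
// Rough consistency check of the reported battery figures. The capacity
// and the one-hour continuous figure come from the paper; the idle draw
// and duty cycle below are assumptions for illustration.
const capacityWh = 2.8;                 // 740 mAh × 3.8 V ≈ 2.8 Wh
const activeDrawW = capacityWh / 1.0;   // ≈ 2.8 W when projecting continuously
const idleDrawW = 0.05;                 // assumed standby draw
const dutyCycle = 0.03;                 // assumed fraction of time projecting
const avgDrawW = dutyCycle * activeDrawW + (1 - dutyCycle) * idleDrawW;
console.log((capacityWh / avgDrawW).toFixed(0), "hours"); // ≈ 21 h, roughly a day
```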
The authors tackled the challenge of heat dissipation; the small size of smartwatches limits their heat dissipation capability. "Vents and fans are a common solution to this problem, but cumbersome in a small and energy-limited form factor," they said.
They said that their current design "dissipates very little heat at the watch-skin interface, as we placed the battery at the bottom of the watch body." They noted future design possibilities: "A future design could incorporate a metallic case thermally coupled to the logic board and projector, which could dissipate some heat to the wearer. A second, more radical possibility is to redesign the watch as a wristband, with hot components better distributed, and also using the watch and straps as heat sinks."

All in all, Gizmodo said, "the LumiWatch is the first smartwatch to integrate a fully-functional laser projector and sensor array, allowing a screen projected on a user's skin to be poked, tapped, and swiped just like a traditional touchscreen."

Wednesday, 2 May 2018

A new way to build road maps from aerial images automatically-RoadTracer

Map apps may have changed our world, but they still haven't mapped all of it yet. Specifically, mapping roads can be difficult and tedious: Even after taking aerial images, companies still have to spend many hours manually tracing out roads. As a result, even companies like Google haven't yet gotten around to mapping the vast majority of the more than 20 million miles of roads across the globe.
Gaps in maps are a problem, particularly for systems being developed for self-driving cars. To address the issue, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created RoadTracer, an automated method to build road maps that's 45 percent more accurate than existing approaches.
Using data from aerial images, the team says that RoadTracer is not just more accurate, but more cost-effective than current approaches. MIT professor Mohammad Alizadeh says this work will be useful both for tech giants like Google and for smaller organizations without the resources to curate and correct large amounts of errors in maps.
"RoadTracer is well-suited to map areas of the world where maps are frequently out of date, which includes both places with lower population and areas where there's frequent construction," says Alizadeh, one of the co-authors of a new paper about the system. "For example, existing maps for remote areas like rural Thailand are missing many roads. RoadTracer could help make them more accurate."
For example, looking at aerial images of New York City, RoadTracer could correctly map 44 percent of its road junctions, which is more than twice as effective as traditional approaches based on image segmentation that could map only 19 percent.
The paper, which will be presented in June at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah, is a collaboration between CSAIL and the Qatar Computing Research Institute (QCRI).
Alizadeh's MIT co-authors include graduate students Favyen Bastani and Songtao He, and professors Hari Balakrishnan, Sam Madden, and David DeWitt. QCRI co-authors include senior software engineer Sofiane Abbar and Sanjay Chawla, who is the research director of QCRI's Data Analytics Group.
Current efforts to automate maps involve training neural networks to look at aerial images and identify individual pixels as either "road" or "not road." Because aerial images can often be ambiguous and incomplete, such systems also require a post-processing step that's aimed at trying to fill in some of the gaps.
Unfortunately, these so-called "segmentation" approaches are often imprecise: If the model mislabels a pixel, that error will get amplified in the final road map. Errors are particularly likely if the aerial images have trees, buildings, or shadows that obscure where roads begin and end. (The post-processing step also requires making decisions based on assumptions that may not always hold up, like connecting two road segments simply because they are next to each other.)
Meanwhile, RoadTracer creates maps step-by-step. It starts at a known location on the road network, and uses a neural network to examine the surrounding area to determine which point is most likely to be the next part on the road. It then adds that point and repeats the process to gradually trace out the road network one step at a time.
"Rather than making thousands of different decisions at once about whether various pixels represent parts of a road, RoadTracer focuses on the simpler problem of figuring out which direction to follow when starting from a particular spot that we know is a road," says Bastani. "This is in many ways actually a lot closer to how we as humans construct mental models of the world around us."
The team trained RoadTracer on aerial images of 25 cities across six countries in North America and Europe, and then evaluated its mapping abilities on 15 other cities.
"It's important for a mapping system to be able to perform well on cities it hasn't trained on, because regions where automatic mapping holds the most promise are ones where existing maps are non-existent or inaccurate," says Balakrishnan.
Bastani says that the fact that RoadTracer had an error rate that is 45 percent lower is essential to making automatic mapping systems more practical for companies like Google.
"If the error rate is too high, then it is more efficient to map the roads manually from scratch versus removing incorrect segments from the inferred map," says Bastani.
Still, implementing something like RoadTracer wouldn't take people completely out of the loop: The team says that they could imagine the system proposing roadmaps for a large region and then having a human expert come in to double-check the design.
"That said, what's clear is that with a system like ours you could dramatically decrease the amount of tedious work that humans would have to do," Alizadeh says.
Indeed, one advantage to RoadTracer's incremental approach is that it makes it much easier to correct errors; human supervisors can simply correct them and re-run the algorithm from where they left off, rather than continue to use imprecise information that trickles down to other parts of the map.
Of course, aerial images are just one piece of the puzzle. They don't give you information about roads that have overpasses and underpasses, since those are impossible to ascertain from above. As a result, the team is also separately developing algorithms that can create maps from GPS data, and working to merge these approaches into a single system for mapping.


Thursday, 26 April 2018

Occipital closes $12M more as it strives to build a ‘perception engine’-Scanning

Sensors are growing more and more sophisticated as we build machines that can interpret the world with more precision than we can.
Occipital is aiming to do this as effectively and cheaply as possible as it morphs its 3D scanning technology into a product that can do much, much more. The company has closed $12 million of what it plans to be a $15 million Series C. The round is being led by the Foundry Group. The company has raised about $33 million to date.
With this round, Occipital is looking to expand its tracking platform into what it calls its “Perception Engine,” which will require it to make some deeper moves into machine learning, pushing into technologies beyond simply defining the geometry of a space. The startup wants its tracking tech to recognize people and identify objects.
SF-based Occipital has moved around a little bit within the tracking space as it’s sought to find a worthwhile niche. The company’s $379 Structure sensor allows users to 3D scan their environment and objects using the high-frame-rate depth camera that attaches to the back of an iOS device. Occipital’s Canvas software solution allows customers to use the camera to develop more refined CAD models. The company later introduced a mixed reality dev kit that brought positional tracking to the iPhone.
The company’s latest bet is bringing quality inside-out tracking to products with its monoSLAM tech that tracks devices in space using a single camera and an IMU. Though AR/VR remains an obvious application, Occipital announced at CES that it has partnered with Misty Robotics. As its focus moves closer toward the market that Intel’s RealSense and other large companies have been aiming to capture, Occipital has its sights set higher than before with some new cash to do so.


Facebook announces way to “Clear History” of apps and sites you’ve clicked-Analytics to developers

Today is a big day for Facebook. The company is hosting its F8 developer conference in San Jose today and just before the event is sch...