
Saturday, May 25, 2019

Robotic hands matching human capabilities


As part of the ongoing rise of consumer-level robotics, recent research in artificial intelligence and bio-inspired devices has reached a new plateau of possibilities. Modern robots are now able to fill an increasingly broad scope of roles in both home and work environments. Easily one of the most important (and difficult) abilities for such machines is being able to recognize and interact with various physical objects.


As has often been the case, engineers turned to the human body itself to model both the form and function of new robot apparatuses. Since almost all robots must interact with and handle physical objects in some way, among the most commonly emulated body parts is the hand. Along with their associated computer programs and visual recognition software, robotic hands in the 2000s and 2010s had already boasted some impressive abilities. By the second half of the 2020s, however, the techniques involved have become sufficiently advanced to overcome most of the obstacles faced in previous decades.


Around this time, some of the first robot hands equalling the capabilities of human hands are appearing in the laboratory. AI programs, using precise visual perception software, are able to recognise countless physical objects and intelligently plan for how they can be manipulated. The robotic hand is therefore able to function autonomously and self-adjust to different objects based on texture, weight and shape. All of this can be accomplished in fluid, natural movements that are largely indistinguishable from those of a real hand.
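As a rough illustration of the perceive-and-adapt loop described above, the sketch below shows how estimated object properties (shape, weight, texture) might drive a grasp choice. All names, thresholds and values are hypothetical and purely illustrative, not taken from any real robot platform.

```python
from dataclasses import dataclass

# Hypothetical sketch of the perceive -> plan -> grasp loop described above.
# Names and thresholds are illustrative, not from any real robot API.

@dataclass
class PerceivedObject:
    label: str        # e.g. "mug", from the visual recognition stage
    shape: str        # "cylinder", "box", "sphere", ...
    weight_kg: float  # estimated from size and material cues
    texture: str      # "smooth", "rough", "deformable"

def plan_grasp(obj: PerceivedObject) -> dict:
    """Pick a grasp type and grip force from the object's estimated properties."""
    grasp = "pinch" if obj.weight_kg < 0.2 else "power"
    # Slippery or deformable objects get a wider safety margin on grip force.
    margin = 1.5 if obj.texture in ("smooth", "deformable") else 1.2
    force_n = obj.weight_kg * 9.81 * margin
    return {"grasp_type": grasp, "grip_force_n": round(force_n, 2), "approach": obj.shape}

print(plan_grasp(PerceivedObject("mug", "cylinder", 0.35, "smooth")))
```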



More companies will turn to synthetic biology for innovation


Biology is already changing the way we live, eat, manufacture, and treat human health. In the next few years, synthetic biology, a $40 billion industry, will become a premier technology of the 21st century, used to solve real-world problems facing millions. We will see more collaboration between the science, technology, and engineering communities, along with more involvement by the next generation of local leaders solving local problems all around the world, in a safe, ethical, and responsible manner.


Chances are you use at least one product every day that can be, or is, made using synthetic biology. In fact, you probably used one of these products this morning. For example, in shampoos, the renewable chemical that makes the thick gel turn to soapy foam, known as a surfactant, can be made using synthetic biology. The biotechnology company Manus Bio has also been able to reduce its environmental footprint through synthetic biology. The company engineered a bacterium that produces a coveted compound found within the stevia plant that can be used in zero-calorie sweeteners.


Conventional methods extract only a fraction of the sweet-tasting compound from the plant, and they often use caustic chemicals. By using synthetic biology to engineer a bacterium that mimicked the plant’s process for making the compound, however, the company was able to synthetically produce the compound with 95 percent purity. Manus Bio plans to start commercially manufacturing the product and selling to industrial partners in 2018.

Cloud-Based Services Will Make Operating Systems Irrelevant



People have been incorrectly predicting the death of operating systems and unique platforms for years. All kidding aside, it’s becoming increasingly clear as we enter 2019 that cloud-based services are rendering the value of proprietary platforms much less relevant for our day-to-day use. Sure, the initial interface of a device and the means for getting access to applications and data depend on the unique vagaries of each tech vendor’s platform, but the real work (or real play) of what we do on our devices is becoming increasingly separated from the artificial world of operating system user interfaces. In both the commercial and consumer realms, it’s now much easier to get access to what we want to do, regardless of the underlying platform.



On the commercial side, the increasing power of desktop and application virtualization tools from the likes of Citrix and VMware, as well as moves like Microsoft’s delivery of Windows desktops from the cloud, all demonstrate how much simpler it is to run critical business applications on virtually any device. In fact, it will be very interesting to see how open and platform-agnostic Apple makes its new video streaming service. If Apple makes it too focused on its own OS-based device users, it risks having a very small impact (even with its large and well-heeled installed base), particularly given the strength of the competition.


Crossover work and consumer products like Office 365 are also shedding any meaningful ties to specific operating systems and are instead focused on delivering a consistent experience across different operating systems, screen sizes, and device types. The concept of abstraction goes well beyond the OS level. New software being developed to leverage the wide range of AI-specific accelerators from vendors like Qualcomm, Intel, and Arm (AI cores in their case) is being written at a high enough level to allow it to work across a very heterogeneous computing environment.
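As a loose illustration of that kind of hardware abstraction, the sketch below picks whichever accelerator backend happens to be available and falls back to the CPU otherwise. The backend names and functions are hypothetical; no real vendor runtime or API is implied.

```python
# Illustrative sketch of the hardware abstraction described above: high-level
# code targets whichever AI accelerator is present, rather than being written
# for one vendor's silicon. Backend names are hypothetical placeholders.

def detect_backends() -> list:
    # In a real system this would query drivers or a vendor-neutral runtime;
    # here we simply return a static example.
    return ["cpu", "npu"]

def run_inference(model: str, data: list) -> list:
    backends = detect_backends()
    # Prefer a dedicated AI core if one is available, otherwise fall back to CPU.
    target = "npu" if "npu" in backends else "cpu"
    print(f"Running {model} on {target}")
    return [x * 0.5 for x in data]  # placeholder for the actual accelerated call

run_inference("image-classifier", [0.1, 0.4, 0.9])
```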


3-D printed electronic membranes to prevent heart attacks


An ultra-thin membrane is specially customised and 3-D printed to exactly match the shape of the patient's heart. Tiny sensors embedded in a grid of flexible electronics measure pulse, temperature, mechanical strain and pH level with far greater accuracy and detail than was possible using previous methods. Doctors can determine the heart's overall health in real time and predict an impending heart attack before a patient has any physical signs, intervening when necessary to provide therapy. The device itself can deliver a pulse of electricity in cases of arrhythmia.
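To make the sense-assess-intervene idea concrete, here is a minimal, hypothetical sketch of such a monitoring loop. The field names, thresholds and responses are illustrative only and are not drawn from the actual device.

```python
# Hypothetical sketch of the monitoring loop described above: read the
# membrane's sensor grid, flag readings that suggest an impending event,
# and trigger a corrective pulse for arrhythmia. Values are illustrative.

def assess(reading: dict) -> str:
    if reading["heart_rate_bpm"] > 180 or reading["heart_rate_bpm"] < 30:
        return "arrhythmia"
    if reading["ph"] < 7.0 or reading["strain"] > 0.8:
        return "warning"   # early sign, alert clinicians before symptoms appear
    return "normal"

def monitor(readings):
    for r in readings:
        status = assess(r)
        if status == "arrhythmia":
            print("Delivering corrective electrical pulse")
        elif status == "warning":
            print("Alerting care team:", r)

monitor([
    {"heart_rate_bpm": 72, "ph": 7.35, "strain": 0.2},
    {"heart_rate_bpm": 190, "ph": 7.30, "strain": 0.4},
])
```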

This electronic membrane can be installed in a relatively non-invasive procedure, by inserting a catheter into a vein beneath the ribs and then opening the mesh like an umbrella. At present, it is restricted to the exterior surface of the heart. However, new and more advanced versions are now being developed that will go directly inside the heart to treat a variety of disorders – including atrial fibrillation, which affects 2.5 million U.S. adults and 4.5 million people living in the EU, accounts for one-third of hospitalisations for cardiac rhythm disturbances and is a major risk factor for stroke.
Great progress is now being made in the monitoring, diagnosis and treatment of heart disorders, thanks to this and other breakthroughs emerging at this time, all of which are contributing to a rapid decline in mortality rates. By the 2040s, deaths from cardiovascular disease will reach negligible levels in some nations.

On-Device AI Will Start to Shift the Conversation About Data Privacy


One of the least understood aspects of using tech-based devices, mobile applications, and other cloud-based services is how much of our private, personal data is being shared in the process—often without our even knowing it. Over the past year, however, we’ve all started to become painfully aware of how big (and far-reaching) the problem of data privacy is. As a result, there’s been an enormous spotlight placed on data handling practices employed by tech companies. Over the next year, I expect to see many more hardware and component makers take this to the next level by talking not just about their on-device data security features, but also about how on-board AI can enhance privacy.



At the same time, expectations about technology’s ability to personalize these apps and services to meet our specific interests, location, and context have also continued to grow. People want and expect technology to be “smarter” about them, because it makes the process of using these devices and services faster, more efficient, and more compelling. The dilemma, of course, is that enabling this customization requires access to some level of personal data, usage patterns, and so on.

Starting in 2019, more of the data analysis work could start being done directly on devices, without the need to share all of it externally, thanks to the AI-based software and hardware capabilities becoming available on our personal devices. Specifically, the idea of doing on-device AI inference (and even some basic on-device training) is now becoming a practical reality thanks to work by semiconductor-related companies like Qualcomm, Arm, Intel, Apple, and many others.
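A minimal sketch of why on-device inference helps with privacy: the raw usage data is processed locally, and at most a small derived summary would ever be a candidate for sharing. The function names and data below are hypothetical and are not part of any real SDK.

```python
# Conceptual sketch of the privacy benefit of on-device inference: raw personal
# data never leaves the device; only a small derived result might be shared.

def run_local_model(sample: dict) -> str:
    # Stand-in for an inference call on the device's own AI accelerator.
    return "active" if sample["steps"] > 8000 else "sedentary"

def personalize_on_device(raw_usage_data: list) -> dict:
    labels = [run_local_model(s) for s in raw_usage_data]
    # Only this small summary would ever be a candidate for leaving the device.
    return {"profile": max(set(labels), key=labels.count)}

print(personalize_on_device([{"steps": 9500}, {"steps": 4000}, {"steps": 12000}]))
```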

