Wednesday, June 21, 2017

AI that can shoot down fighter planes helps treat bipolar disorder

The artificial intelligence that can blow human pilots out of the sky in air-to-air combat accurately predicted treatment outcomes for bipolar disorder, according to a new medical study by the University of Cincinnati.
The findings open a world of possibility for using AI, or machine learning, to treat disease, researchers said.
David Fleck, an associate professor at the UC College of Medicine, and his co-authors used artificial intelligence called "genetic fuzzy trees" to predict how bipolar patients would respond to lithium.
Bipolar disorder, depicted in the TV show "Homeland" and the Oscar-winning "Silver Linings Playbook," affects as many as six million adults in the United States or 4 percent of the adult population in a given year.
"In psychiatry, treatment of bipolar disorder is as much an art as a science," Fleck said. "Patients are fluctuating between periods of mania and depression. Treatments will change during those periods. It's really difficult to treat them appropriately during stages of the illness."
The study authors found that even the best of eight common models used in treating bipolar disorder predicted who would respond to lithium treatment with 75 percent accuracy. By comparison, the model UC researchers developed using AI predicted how patients would respond to lithium 100 percent of the time. Even more impressively, the UC model predicted the actual reduction in manic symptoms after lithium treatment with 92 percent accuracy.
It turns out that the same kind of artificial intelligence that outmaneuvered Air Force pilots last year in simulation after simulation at Wright-Patterson Air Force Base is equally adept at making beneficial decisions that can help doctors treat disease. The findings were published this month in the journal Bipolar Disorders.
"What this shows is that an effort funded for aerospace is a game-changer for the field of medicine. And that is awesome," said Kelly Cohen, a professor in UC's College of Engineering and Applied Science.
Cohen's doctoral graduate Nicholas Ernest is founder of the company Psibernetix, Inc., an artificial intelligence development and consultation company. Psibernetix is working on applications such as air-to-air combat, cybersecurity and predictive analytics. Ernest's fuzzy logic algorithm is able to sort through vast numbers of possibilities to arrive at the best choices in literally the blink of an eye.
"Normally the problems our AIs solve have many, many googolplexes of possible solutions -- effectively infinite," study co-author Ernest said.
His team developed a genetic fuzzy logic system called ALPHA that is capable of shooting down human pilots in simulations, even when the computer's aircraft was intentionally handicapped with a slower top speed and less nimble flight characteristics. The system's autonomous real-time decision-making shot down retired U.S. Air Force Col. Gene Lee in every engagement.
"It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment," Lee said last year. "It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
The American Institute of Aeronautics and Astronautics honored Cohen and Ernest this year for their "advancement and application of artificial intelligence to large scale, meaningful and challenging aerospace-related problems."
Cohen spent much of his career working with fuzzy-logic based AI in drones. He used a sabbatical from the engineering college to approach the UC College of Medicine with an idea: What if they could apply the amazing predictive power of fuzzy logic to a particularly nettlesome medical problem?
Medicine and avionics have little in common. But each entails an ordered process -- a vast decision tree -- to arrive at the best choices. Fuzzy logic is a system that relies not on specific definitions but generalizations to compensate for uncertainty or statistical noise. This artificial intelligence is called "genetic fuzzy" because it constantly refines its answer, tossing out the lesser choices in a way analogous to the genetic processes of Darwinian natural selection.
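As a rough illustration only (not Psibernetix's ALPHA or LITHIA code, whose details are not given in this article), the Python sketch below evaluates a tiny fuzzy rule base built from triangular membership functions and refines its parameters by a mutate-and-select loop, the "survival of the fittest" flavor the genetic part refers to. All data, weights and parameters here are invented for illustration.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_predict(x, params):
    """Weighted average of two rule outputs ('low', 'high') for one input feature."""
    low_peak, high_peak, low_out, high_out = params
    w_low = tri(x, low_peak - 1.0, low_peak, low_peak + 1.0)
    w_high = tri(x, high_peak - 1.0, high_peak, high_peak + 1.0)
    total = w_low + w_high
    return (w_low * low_out + w_high * high_out) / total if total else 0.5

def fitness(params, data):
    """Mean squared error of the rule base on (feature, target) pairs; lower is better."""
    return sum((fuzzy_predict(x, params) - t) ** 2 for x, t in data) / len(data)

# Toy data: feature value -> response score in [0, 1] (made up for illustration).
data = [(0.2, 0.1), (0.8, 0.2), (2.1, 0.7), (2.9, 0.9)]

best = [1.0, 2.5, 0.0, 1.0]              # initial guess: membership peaks and rule outputs
for generation in range(200):            # "genetic" refinement by mutate-and-select
    candidate = [p + random.gauss(0, 0.1) for p in best]
    if fitness(candidate, data) < fitness(best, data):
        best = candidate                  # keep the fitter rule base, discard the weaker

print("refined rule parameters:", [round(p, 2) for p in best])
```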
Cohen compares it to teaching a child how to recognize a chair. After seeing just a few examples, any child can identify the object people sit in as a chair, regardless of its shape, size or color.
"We do not require a large statistical database to learn. We figure things out. We do something similar to emulate that with fuzzy logic," Cohen said.
Cohen found a receptive audience in Fleck, who was working with UC's former Center for Imaging Research. After all, who better to tackle one of medical science's hardest problems than a rocket scientist? Cohen, an aerospace engineer, felt up to the task.
Ernest said people should not conflate the technology with its applications. The algorithm he developed is not a sentient being like the villains in the "Terminator" movie franchise but merely a tool, he said, albeit a powerful one with seemingly endless applications.
"I get emails and comments every week from would-be John Connors out there who think this will lead to the end of the world," Ernest said.
Ernest's company created EVE, a genetic fuzzy AI that specializes in the creation of other genetic fuzzy AIs. EVE came up with a predictive model for patient data called the LITHium Intelligent Agent or LITHIA for the bipolar study.
"This predictive model taps into the power of fuzzy logic to allow you to make a more informed decision," Ernest said.
And unlike other types of AI, fuzzy logic can describe in simple language why it made its choices, he said.
The researchers teamed up with Dr. Caleb Adler, vice chairman of clinical research in the UC Department of Psychiatry and Behavioral Neuroscience, to examine bipolar disorder, a common, recurrent and often lifelong illness. Despite the prevalence of mood disorders, their causes are poorly understood, Adler said.
"Really, it's a black box," Adler said. "We diagnose someone with bipolar disorder. That's a description of their symptoms. But that doesn't mean everyone has the same underlying causes."
Selecting the appropriate treatment can be equally tricky.
"Over the past 15 years there has been an explosion of treatments for mania. We have more options. But we don't know who is going to respond to what," Adler said. "If we could predict who would respond better to treatment, you would save time and consequences."
With appropriate care, bipolar disorder is a manageable chronic illness for patients whose lives can return to normal, he said.
UC's new study, funded in part by a grant from the National Institute of Mental Health, identified 20 patients who were prescribed lithium for eight weeks to treat a manic episode. Fifteen of the 20 patients responded well to the treatment.
The algorithm used an analysis of two types of patient brain scans, among other data, to predict with 100 percent accuracy which patients responded well and which didn't. And the algorithm also predicted the reductions in symptoms at eight weeks, an achievement made even more impressive by the fact that only objective biological data were used for prediction rather than subjective opinions from experienced physicians.
"This is a huge first step and ultimately something that will be very important to psychiatry and across medicine," Adler said.
How much potential does this have to revolutionize medicine?
"I think it's unlimited," Fleck said. "It's a good result. The best way to validate it is to get a new cohort of individuals and apply their data to the system."
Cohen is less reserved in his enthusiasm. He said the model could help personalize medicine to individual patients like never before, making health care both safer and more affordable. Fewer side effects mean fewer hospital visits, less secondary medication and better treatments.
Now the UC researchers and Psibernetix are working on a new study applying fuzzy logic to diagnosing and treating concussions, another condition that has bedeviled doctors.
"The impact on society could be profound," Cohen said.

Cow herd behavior is fodder for complex systems analysis

The image of grazing cows in a field has long conjured up a romantic nostalgia about a relaxed pace of rural life. With closer inspection, however, researchers have recognized that what appears to be a randomly dispersed herd peacefully eating grass is in fact a complex system of individuals in a group facing differing tensions. A team of mathematicians and a biologist has now built a mathematical model that incorporates a cost function to behavior in such a herd to understand the dynamics of such systems.
Complex systems research looks at how systems display behaviors beyond those capable from individual components in isolation. This rapidly emerging field can be used to elucidate phenomena observed in many other disciplines including biology, medicine, engineering, physics and economics.
"Complex systems science seeks to understand not just the isolated components of a given system, but how the individual components interact to produce 'emergent' group behaviour," said Erik Bollt, director of the Clarkson Center for Complex Systems Science and a professor of mathematics and of electrical and computer engineering.
Bollt conducted the work with his team, lead-authored by post-doctoral fellow Kelum Gajamannage, which was reported this week in the journal Chaos, from AIP Publishing.
"Cows grazing in a herd is an interesting example of a complex system," said Bollt. "An individual cow performs three major activities throughout an ordinary day. It eats, it stands while it carries out some digestive processes, and then it lies down to rest."
While this process seems simple enough, there is also a balancing of group dynamics at work.
"Cows move and eat in herds to protect themselves from predators," said Bollt. "But since they eat at varying speeds, the herd can move on before the slower cows have finished eating. This leaves these smaller cows facing a difficult choice: Continue eating in a smaller, less safe group, or move along hungry with the larger group. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous for some animals to split into subgroups with similar nutritional needs."
Bollt and his colleagues incorporated a cost function into their model to capture these tensions. This added mathematical complexity to the work, but its necessity became apparent after they discussed cow behavior with their co-author, Marian Dawkins, a biologist with experience researching cows.
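The paper's actual cost function is not reproduced in this article; the toy Python sketch below only illustrates the kind of trade-off described above, charging a cow for leaving its meal unfinished and for straying from the safety of the herd. The functional form and weights are arbitrary stand-ins, not the authors' fitted model.

```python
def cost(hunger, distance_to_herd, w_hunger=1.0, w_risk=0.5):
    """Toy cost for one cow: unfinished eating plus predation risk from straying.

    hunger: fraction of the meal still uneaten (0 = full, 1 = hasn't eaten).
    distance_to_herd: distance from the centre of the main group.
    The weights are illustration values only, not fitted parameters.
    """
    return w_hunger * hunger + w_risk * distance_to_herd ** 2

# A slow eater deciding between staying put to finish eating or moving with the herd.
stay = cost(hunger=0.1, distance_to_herd=8.0)   # nearly full, but left far behind
move = cost(hunger=0.6, distance_to_herd=0.0)   # still hungry, but safely in the group
print("stay:", stay, "move:", move, "->", "stay" if stay < move else "move")
```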
"Some findings from the simulation were surprising," Bollt said. "One might have thought there would be two static groups of cows -- the fast eaters and the slow eaters -- and that the cows within each group carried out their activities in a synchronized fashion. Instead we found that there were also cows that moved back and forth between the two."
"The primary cause is that this complex system has two competing rhythms," Bollt also said. "The large-sized animal group had a faster rhythm and the small-sized animal group had a slower rhythm. To put it into context, a cow might find itself in one group, and after some time the group is too fast. Then it moves to the slower group, which is too slow, but while moving between the two groups, the cow exposes itself more to the danger of predators, causing a tension between the cow's need to eat and its need for safety."
The existing model and cost function could be used as a basis for studying other herding animals. In the future, there may even be scope to incorporate it into studies about human behavior in groups. "The cost function is a powerful tool to explore outcomes in situations where there are individual and group-level tensions at play," said Bollt.

Sound waves direct particles to self-assemble, self-heal



An elegantly simple experiment with floating particles self-assembling in response to sound waves has provided a new framework for studying how seemingly lifelike behaviors emerge in response to external forces.
Scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) demonstrated how particles, floating on top of a glycerin-water solution, synchronize in response to acoustic waves blasted from a computer speaker.
The study, published today (Monday, June 19) in the journal Nature Materials, could help address fundamental questions about energy dissipation and how it allows living and nonliving systems to adapt to their environment when they are out of thermodynamic equilibrium.
"Dynamic self-assembly under non-equilibrium is not only important in physics, but also in our living world," said Xiang Zhang, corresponding author of the paper and a senior faculty scientist at Berkeley Lab's Materials Sciences Division with a joint appointment at UC Berkeley. "However, the underlying principles governing this are only partially understood. This work provides a simple yet elegant platform to study and understand such phenomena."
To hear some physicists describe it, this state of non-equilibrium, characterized by the ability to constantly change and evolve, is the essence of life. It applies to biological systems, from cells to ecosystems, as well as to certain nonbiological systems, such as weather or climate patterns. Studying non-equilibrium systems gets theorists a bit closer to understanding how life -- particularly intelligent life -- emerges.
However, it is complicated and hard to study because non-equilibrium systems are open systems, Zhang said. He noted that physicists like to study things that are stable and in closed systems.
"We show that individually 'dumb' particles can self-organize far from equilibrium by dissipating energy and emerge with a collective trait that is dynamically adaptive to and reflective of their environment," said study co-lead author Chad Ropp, a postdoctoral researcher in Zhang's group. "In this case, the particles followed the 'beat' of a sound wave generated from a computer speaker."
Notably, after the researchers intentionally broke up the particle party, the pieces would reassemble, showing a capacity to self-heal.
Ropp noted that this work could eventually lead to a wide variety of "smart" applications, such as adaptive camouflage that responds to sound and light waves, or blank-slate materials whose properties are written on demand by externally controlled drives.
While previous studies have shown that particles are capable of self-assembly in response to an external force, this paper presents a general framework that researchers can use to study the mechanisms of adaptation in non-equilibrium systems.
"The distinction in our work is that we can predict what happens -- how the particles will behave -- which is unexpected," said another co-lead author Nicolas Bachelard, who is also a postdoctoral researcher in Zhang's group.
Driven by sound waves at a frequency of 4 kilohertz, the scattering particles drifted along at about 1 centimeter per minute. Within 10 minutes, a collective pattern of particles emerged in which the spacing between particles was surprisingly non-uniform. The researchers found that the self-assembled particles exhibited a phononic bandgap -- a frequency range in which acoustic waves cannot pass -- whose edge was inextricably linked, or "enslaved," to the 4 kHz input.
"This is a characteristic that was not present with the individual particles," said Bachelard. "It only appeared when the particles collectively organized, which is why we call this an emergent property of our structure under non-equilibrium conditions."
The experimental design could hardly have been simpler. For the waveguide, the researchers used a 2-meter-long acrylic tube that contained a 5-millimeter-deep pool of a glycerin-water solution. The particles were made from straws floating on top of a flat piece of plastic, and the sound source came from off-the-shelf computer speakers that researchers directed into the tube via a plastic funnel. Measuring the sound waves proved to be the most technical part of the experiment.
"This is something you could do yourself in your garage," said Ropp. "It was a dirt-cheap experiment with parts that are available at your corner hardware store. At one point, we needed bigger straws, so I went out and bought some boba tea. The setup was extremely simple, but it showed the physics beautifully."
The experiment focused on acoustic waves because soundproofing was easier to achieve, but the principles underlying the behavior they observed would be applicable to any wave system, the researchers said.
This fundamental research could form the basis for developing intelligent networks that perform simple non-algorithmic computation, with a future toward systems that perform sentient-like decision making, the researchers said.
"I can think of parallels to artificial brains, with sections that respond to different frequency 'brain waves' that are malleable and reconfigurable," said Ropp.

Firefly gene illuminates ability of optimized CRISPR-Cpf1 to efficiently edit human genome

Scientists on the Florida campus of The Scripps Research Institute (TSRI) have improved a state-of-the-art gene-editing technology, advancing the system's ability to target, cut and paste genes within human and animal cells and broadening the ways the CRISPR-Cpf1 editing system may be used to study and fight human diseases.
Professor Michael Farzan, co-chair of TSRI's Department of Immunology and Microbiology, and TSRI Research Associate Guocai Zhong improved the efficiency of the CRISPR-Cpf1 gene editing system by incorporating guide RNAs with "multiplexing" capability. Guide RNAs are short nucleic acid strings that lead the CRISPR molecular scissors to their intended gene targets. The TSRI discovery means multiple genetic targets in a cell may be hit by each CRISPR-Cpf1 complex.
"This system simplifies and significantly improves the efficiency of simultaneous editing of multiple genes, or multiple sites of a single gene," Zhong said. "This could be very useful when multiple disease-related genes or multiple sites of a disease-related gene need to be targeted."
"This approach improves gene editing for a number of applications," Farzan added. "The system makes some applications more efficient and other applications possible."
This study was published as an advanced online paper in the journal Nature Chemical Biology on June 19, 2017.
TSRI Advance Makes CRISPR More Efficient
Short for "Clustered Regularly Interspaced Short Palindromic Repeat," the CRISPR gene editing system exploits an ancient bacterial immune defense process. Some microbes thwart viral infection by sequestering a piece of a virus' foreign genetic material within its own DNA, to serve as a template. The next time the viral sequence is encountered by the microbe, it's recognized immediately and cut up for disposal with the help of two types of RNA. Molecules called guide RNAs provide the map to the invader, and CRISPR effector proteins act as the scissors that cut it apart.
Over the last five years, the CRISPR gene editing system has revolutionized microbiology and renewed hopes that genetic engineering might eventually become a useful treatment for disease. But time has revealed the technology's limitations. For one, gene therapy currently requires using a viral shell to serve as the delivery package for the therapeutic genetic material. The CRISPR molecule is simply too large to fit with multiple guide RNAs into the most popular and useful viral packaging system.
The new study from Farzan and colleagues helps solve this problem by letting scientists package multiple guide RNAs.
This advance could be important if gene therapy is to treat diseases such as hepatitis B, Farzan said. After infection, hepatitis B DNA sits in liver cells, slowly directing the production of new viruses, ultimately leading to liver damage, cirrhosis and even cancer. The improved CRISPR-Cpf1 system, with its ability to 'multiplex,' could more efficiently digest the viral DNA, before the liver is irrevocably damaged, he said.
"Efficiency is important. If you modify 25 cells in the liver, it is meaningless. But if you modify half the cells in the liver, that is powerful," Farzan said. "There are other good cases -- say muscular dystrophy -- where if you can repair the gene in enough muscle cells, you can restore the muscle function."
Two types of these molecular scissors are now widely used for gene editing: Cas9 and Cpf1. Farzan said he focused on Cpf1 because it is more precise in mammalian cells. The Cpf1 molecules they studied were sourced from two types of bacteria, Lachnospiraceae bacterium and Acidaminococcus sp., whose activity had previously been studied in E. coli. A key property of these molecules is that they can excise their own guide RNAs out of a long string of such RNA, but it was not clear whether this would work with RNA produced in mammalian cells. Zhong tested the idea by editing a firefly bioluminescence gene into the cell's chromosome. The modified CRISPR-Cpf1 system worked as anticipated.
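The sketch below is only a schematic of what "multiplexing" means here: several repeat-spacer units strung into one transcript, which Cpf1 can then chop into individual guides on its own. The direct-repeat and spacer sequences are placeholders, not the published Lachnospiraceae or Acidaminococcus sequences or real gene targets.

```python
# Illustrative sketch only: sequences below are placeholders, not real CRISPR components.
DIRECT_REPEAT = "N" * 19          # placeholder; use the Cpf1 ortholog's published direct repeat
spacers = {
    "target_site_1": "ACGT" * 5 + "ACG",   # 23-nt placeholder spacer
    "target_site_2": "TGCA" * 5 + "TGC",
    "target_site_3": "GGCC" * 5 + "GGC",
}

def build_crRNA_array(spacers):
    """Concatenate repeat-spacer units into one multiplexed array.

    Because Cpf1 can process such a transcript into individual guide RNAs itself,
    a single construct can encode several targets at once.
    """
    return "".join(DIRECT_REPEAT + seq for seq in spacers.values()) + DIRECT_REPEAT

array = build_crRNA_array(spacers)
print(f"{len(spacers)} guides encoded in a single {len(array)}-nt array")
```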
"This means we can use simpler delivery systems for directing the CRISPR effector protein plus guide RNAs," Farzan said. "It's going to make the CRISPR process more efficient for a variety of applications."
Looking forward, Farzan said the Cpf1 protein needs to be more broadly understood so that its utility in delivering gene therapy vectors can be further expanded.
In addition to Farzan and Zhong, the authors of the study, "Cpf1 proteins excise CRISPR RNAs from mRNA transcripts in mammalian cells," included Haimin Wang, and Mai H. Tran of TSRI; and Yujun Li of TSRI and Wu Lien-Teh Institute, Harbin Medical University, Harbin, China.

X-ray eyes in the sky: Drones and WiFi for 3-D through-wall imaging



Researchers in UC Santa Barbara professor Yasamin Mostofi's lab have given the first demonstration of three-dimensional imaging of objects through walls using ordinary wireless signals. The technique, which involves two drones working in tandem, could have a variety of applications, such as emergency search-and-rescue, archaeological discovery and structural monitoring.
"Our proposed approach has enabled unmanned aerial vehicles to image details through walls in 3D with only WiFi signals," said Mostofi, a professor of electrical and computer engineering at UCSB. "This approach utilizes only WiFi RSSI measurements, does not require any prior measurements in the area of interest and does not need objects to move to be imaged."
The proposed methodology and experimental results appeared in the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN) in April, 2017.
In their experiment, two autonomous octocopters take off and fly outside an enclosed, four-sided brick house whose interior is unknown to the drones. While in flight, one copter continuously transmits a WiFi signal, the received power of which is measured by the other copter for the purpose of 3D imaging. After traversing a few proposed routes, the copters utilize the imaging methodology developed by the researchers to reveal the area behind the walls and generate 3D high-resolution images of the objects inside. The 3D image closely matches the actual area.
"High-resolution 3D imaging through walls, such as brick walls or concrete walls, is very challenging, and the main motivation for the proposed approach," said Chitra R. Karanam, the lead Ph.D. student on this project.
This development builds on previous work in the Mostofi Lab, which has pioneered sensing and imaging with everyday radio frequency signals such as WiFi. The lab published the first experimental demonstration of imaging with only WiFi in 2010, followed by several other works on this subject.
"However, enabling 3D through-wall imaging of real areas is considerably more challenging due to the considerable increase in the number of unknowns," said Mostofi. While their previous 2D method utilized ground-based robots working in tandem, the success of the 3D experiments is due to the copters' ability to approach the area from several angles, as well as to the new proposed methodology developed by her lab.
The researchers' approach to enabling 3D through-wall imaging utilizes four tightly integrated key components. First, they proposed robotic paths that can capture the spatial variations in all the three dimensions as much as possible, while maintaining the efficiency of the operation.
Second, they modeled the 3D unknown area of interest as a Markov Random Field to capture the spatial dependencies, and utilized a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3D image) based on the decisions of the neighboring voxels.
Third, in order to approximate the interaction of the transmitted wave with the area of interest, they used a linear wave model.
Finally, they took advantage of the compressibility of the information content to image the area with a very small number of WiFi measurements (less than 4 percent). It is noteworthy that their setup consists solely of off-the-shelf units such as copters, WiFi transceivers and Tango tablets.
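The team's actual solver is not spelled out beyond the components above, but the compressibility idea can be illustrated generically. The Python sketch below recovers a sparse "scene" (mostly empty voxels) from far fewer random linear measurements than unknowns using iterative soft-thresholding; all sizes, the measurement model and the data are invented for illustration and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 60, 5             # 400 unknown voxels, 60 measurements, 5 occupied voxels

x_true = np.zeros(n)             # toy "scene": mostly empty space
x_true[rng.choice(n, k, replace=False)] = 1.0

A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic linear measurement model
y = A @ x_true                                  # the few WiFi-like measurements

# Iterative soft-thresholding (ISTA) for min ||Ax - y||^2 + lam * ||x||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("largest recovered voxels:", sorted(np.argsort(np.abs(x))[-k:]))
print("true occupied voxels:    ", sorted(np.flatnonzero(x_true)))
```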

How six cups of ground coffee can improve nose, throat surgery



Imagine plopping six cups of coffee grounds on the heads of patients just before they are wheeled into the operating room to have nose or throat surgery.
In essence, that is what a team of Vanderbilt University engineers is proposing in an effort to improve the reliability of the sophisticated "GPS" system that surgeons use for these delicate operations. They have designed a "granular jamming cap" filled with coffee grounds that does a better job of tracking patient head movements than current methods. They are disclosing the novel design and data on its effectiveness at the International Conference on Information Processing in Computer-Assisted Interventions in Barcelona on June 20.
Of course, the coffee grounds are not loose: they form a thin layer inside a stretchy silicone headpiece, which looks something like a black latex swim cap decorated with reflective dots. After the cap is placed on the patient's head, it is attached to a vacuum pump that sucks the air out of the cap, jamming the tiny grounds together to form a rigid layer that conforms closely to the shape of the patient's head. (This is the same effect that turns vacuum-packed coffee into solid bricks.)
Before surgery, a special scanner is used to map the location of the dots relative to key features on the patient's head: a process called registration. Then, during surgery an overhead camera observes the position of the dots allowing the navigation system to accurately track the position of the patient's head when the surgeon repositions it. The computer uses this information to combine a CT scan, which provides a detailed 3-D view of the bone and soft tissue hidden inside the patient's head, with the position of the instruments the surgeon is using and displays them together in real time on a monitor in the operating room.
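The commercial navigation system's registration algorithm is not described in the article; as a generic illustration of the underlying computation, the sketch below uses the Kabsch (SVD-based) method to recover the rigid rotation and translation relating corresponding marker positions seen in two coordinate frames, such as the preoperative scan frame and the overhead camera frame. The marker layout and head motion here are synthetic.

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t with Q ~ R @ P + t (Kabsch algorithm).

    P, Q: (N, 3) arrays of corresponding marker positions in two coordinate frames.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: markers on a cap, observed again after an arbitrary head rotation + shift.
rng = np.random.default_rng(1)
P = rng.uniform(-5, 5, size=(12, 3))                      # marker layout (arbitrary units)
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])             # repositioned head
R, t = rigid_align(P, Q)
print("max alignment error:", np.abs(P @ R.T + t - Q).max())
```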
"These are very delicate operations and a sophisticated image guidance system has been developed to help the surgeons, but they don't trust the system because sometimes it is spot on and other times it is off the mark," said Robert Webster, associate professor of mechanical engineering and otolaryngology, who is developing a surgical robot designed specifically for endonasal surgery. "When we heard about this, we began wondering what was causing these errors and we decided to investigate."
When Webster and his research team looked into the matter, they were surprised by what they found. They discovered it wasn't the hardware or the software in the guidance system that was causing the problem. It was the way the reflective markers were attached to the patient's head that was at fault. Typically, these "fiducial markers" are attached by an elastic headband and double-backed tape and are subject to jarring and slipping. Their tests found that skin movement and accidental bumps by operating room staff both produced large tracking errors.
"The basic assumption is that, after registration, the spatial relationships between the patient's head and the fiducial markers remains constant," said Patrick Wellborn, the graduate student who is making the presentation. "Unfortunately, that is not the case. For one thing, studies have shown that the skin on a person's forehead can move as much as a half an inch relative to the skull. And accidentally bumping or dragging cables over the headband can also produce significant targeting errors."
In fact, previous research has found that when everything goes well, the guidance system produces targeting errors of about 2 millimeters but, in about one operation out of seven, the target error is much larger, forcing the surgeon to redo the registration process.
"Actually, we do have a solution to this problem but it involves drilling and attaching the markers directly to the skull...which we don't like to do because it is painful and it's a step backwards from the majority of what we are doing," said Assistant Professor of Otolaryngology Paul Russell, who is collaborating with the engineers on the project.
So the team began thinking up alternative, non-invasive methods to attach these critical markers. Webster recalled some experiments that were done using coffee grounds to help robots grip irregularly shaped objects. Bladders filled with coffee grounds were built into the robot's gripper. When it grabbed an object, the bladders conformed to its shape. Then a vacuum was pulled, the coffee grounds became rigid and locked the object into place. The researchers decided to see if this technique could be applied to the problem.
In the last three years, they have gone through a number of designs. They began with headbands that had coffee-ground-filled bags over the temples. Their tests showed that these models could reduce the targeting error by about 50 percent. But the engineers still weren't satisfied.
Then, Wellborn, who was taking over the project, had a brainstorming session with fellow graduate student Richard Hendrick. Among the materials that his predecessor had left behind was a latex bald cap. "That sparked the idea of caps in general," Wellborn recalled. "We wanted something elastic that was form fitting, which led to the idea of a swim cap."
In addition to fitting extremely tightly to the head, the new design had another advantage. The headband system has only three fiducial markers attached on the ends of three thin rods to form a triangle. The new design allowed them to attach several dozen markers directly to the surface of the cap, which the researchers believe will also contribute to improving the guidance system's accuracy.
They designed three tests to determine how well this "granular jamming cap" performed relative to the current headband in reducing targeting error:
  • They "bumped" both them from a number of different directions with a tennis ball filled with plastic particles swinging on the end of a string. They found that the cap reduced targeting errors by 83 percent.
  • They simulated the case where a cable or other piece of equipment is accidentally pushed against the two by applying forces ranging from four to six pounds in different directions. They determined that the cap outperformed the headband by 76 percent when the forces were applied to the headband and by 92 percent when they were applied to the markers.
  • They tested the effects of head repositioning by having an experienced surgeon reposition test subjects' heads six to seven times. In this case, the cap proved to have 66 percent lower error rates than the headband.
On the strength of these results, Vanderbilt University has applied for a patent on the design and the technology is available for licensing. (Interested parties should contact the Vanderbilt Center for Technology Transfer and Commercialization.)
"It's a very clever way -- that doesn't involve drilling holes in patients' skulls -- to greatly improve the accuracy of the guidance system when we are operating in the middle of a person's skull: a zone where the accuracy of the current system is inadequate," said Russell.

Robot uses deep learning and big data to write and play its own music

A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning.
Researchers fed the robot nearly 5,000 complete songs -- from Beethoven to the Beatles to Lady Gaga to Miles Davis -- and more than 2 million motifs, riffs and licks of music. Aside from giving the machine a seed, or the first four measures to use as a starting point, no humans are involved in either the composition or the performance of the music.
The first two compositions are roughly 30 seconds in length. The robot, named Shimon, can be seen and heard playing them at https://www.youtube.com/watch?v=j82nYLOnKtM and https://www.youtube.com/watch?v=6MSk5PP9KUA.
Ph.D. student Mason Bretan is the man behind the machine. He's worked with Shimon for seven years, enabling it to "listen" to music played by humans and improvise over pre-composed chord progressions. Now Shimon is a solo composer for the first time, generating the melody and harmonic structure on its own.
"Once Shimon learns the four measures we provide, it creates its own sequence of concepts and composes its own piece," said Bretan, who will receive his doctorate in music technology this summer at Georgia Tech. "Shimon's compositions represent how music sounds and looks when a robot uses deep neural networks to learn everything it knows about music from millions of human-made segments."
Bretan says this is the first time a robot has used deep learning to create music. And unlike its days of improvising, when it played monophonically, Shimon is able to play harmonies and chords. It's also thinking much more like a human musician, focusing less on the next note, as it did before, and more on the overall structure of the composition.
"When we play or listen to music, we don't think about the next note and only that next note," said Bretan. "An artist has a bigger idea of what he or she is trying to achieve within the next few measures or later in the piece. Shimon is now coming up with higher-level musical semantics. Rather than thinking note by note, it has a larger idea of what it wants to play as a whole."
Shimon was created by Bretan's advisor, Gil Weinberg, director of Georgia Tech's Center for Music Technology.
"This is a leap in Shimon's musical quality because it's using deep learning to create a more structured and coherent composition," said Weinberg, a professor in the School of Music. "We want to explore whether robots could become musically creative and generate new music that we humans could find beautiful, inspiring and strange."
Shimon will create more pieces in the future. As long as the researchers feed it a different seed, the robot will produce something different each time -- music that the researchers can't predict. In the first piece, Bretan fed Shimon a melody made up of eighth notes. The second time it received a sixteenth-note melody, which influenced it to generate faster note sequences.

Bretan acknowledges that he can't pick out individual songs that Shimon is referencing. He can, however, recognize classical chord progressions and the influence of artists such as Mozart. "They sound like a fusion of jazz and classical," said Bretan, who plays the keyboards and guitar in his free time. "I definitely hear more classical, especially in the harmony. But then I hear chromatic moving steps in the first piece -- that's definitely something you hear in jazz."
Shimon's debut as a solo composer was featured in a video clip during the Consumer Electronics Show (CES) keynote and will have its first live performance at the Aspen Ideas Festival at the end of June. It's the latest project within Weinberg's lab. He and his students have also created a robotic prosthesis for a drummer, a robotic third arm for all drummers, and an interactive robotic companion that plays music from a phone and dances to the beat.

Story Source:
Materials provided by Georgia Institute of Technology. Original written by Jason Maderer.

How to build software for a computer 50 times faster than anything in the world


Imagine you were able to solve a problem 50 times faster than you can now. With this ability, you have the potential to come up with answers to even the most complex problems faster than ever before.
Researchers behind the U.S. Department of Energy's (DOE) Exascale Computing Project want to make this capability a reality, and are doing so by creating tools and technologies for exascale supercomputers -- computing systems at least 50 times faster than those used today. These tools will advance researchers' ability to analyze and visualize complex phenomena such as cancer and nuclear reactors, which will accelerate scientific discovery and innovation.
Developing layers of software that support and connect hardware and applications is critical to making these next-generation systems a reality.
"These software environments have to be robust and flexible enough to handle a broad spectrum of applications, and be well integrated with hardware and application software so that applications can run and operate seamlessly," said Rajeev Thakur, a computer scientist at the DOE's Argonne National Laboratory and the director of software technology for the Exascale Computing Project (ECP).
Researchers in Argonne's Mathematics and Computer Science Division are collaborating with colleagues from five other core ECP DOE national laboratories -- Lawrence Berkeley, Lawrence Livermore, Sandia, Oak Ridge and Los Alamos -- in addition to other labs and universities.
Their goal is to create new and adapt existing software technologies to operate at exascale by overcoming challenges found in several key areas, such as memory, power and computational resources.
Checkpoint/restart
Argonne computer scientist Franck Cappello leads an ECP project focused on advanced checkpoint/restart, a defense mechanism for withstanding failures that happen when applications are running.
"Given their complexity, faults in high-performance systems are a common occurrence, and some of them lead to failures that cause parallel applications to crash," Cappello said.
"Many ECP applications already feature checkpoint/restart, but because we're moving towards an even more complex system at exascale, we need more sophisticated methods for it. For us, that means providing an effective and efficient checkpoint/restart for ECP applications that lack it, and providing other applications a more efficient and scalable checkpoint/restart."
Cappello also leads a project focused on reducing the large amounts of data generated by these machines, which are expensive to store and communicate effectively.
"We're developing techniques that can reduce data volume by at least a factor of 10. The problem with this is that you add some margin of error when you reduce the data," Cappello said.
"The focus then is on controlling the margin of error; you want to control the error so it doesn't affect the scientific result in the end while still being efficient at reduction, and this is one of the challenges we are looking at."
Memory
For information that is stored on exascale systems, researchers need data management controls for memory, power and processing cores. Argonne computer scientist Pete Beckman is investigating methods for managing all three through a project known as Argo.
"The efficiency of memory and storage have to keep up with the increase in computation rates and data movement requirements that will exist at exascale," Beckman said.
"But how memory is arranged in systems and the technology used for it is also changing, and has more layers," he said. "So we have to account for these changes, in addition to anticipating and designing around the future needs of the applications that will use these systems."
With added layers of memory on exascale systems, researchers must develop complementary software that regulates these memory technologies and gives users control over the process.
"Having controls in place is important because where you choose to store information affects how quickly you can retrieve it," Beckman said.
Power
Another key resource that Beckman and Argo Project researchers are studying is power. As with memory, methods for allocating power resources could speed up or slow computation within a high-performance system. Researchers are interested in developing software technologies that could enhance users' control over this resource.
"Power limits may not be at the top of the list when you're dealing with smaller systems, but when you're talking about tens of megawatts of power, which is what we'll need in the future, how an application uses that power becomes an important distinguishing characteristic," Beckman said.
"The goal for us is to achieve a level of control that maximizes the user's abilities while maintaining efficiency and minimizing cost," he said.
Processing Cores
Ultra-fine controls are also needed for managing cores within an exascale system.
"With each generation of supercomputers we keep adding processing cores, but the system software that makes them work needs ways to partition and manage all the cores," Beckman said. "And since we're dealing millions of cores, even making small adjustments can have a tremendous impact on what we're able to do; improving performance by say, two to three percent, is equivalent to thousands of laptops' worth of computation."
One concept Beckman and fellow researchers are exploring to better manage cores is containerization, a method for grouping a select number of cores together and treating them as a unit, or "container," that can be controlled independently.
"The tools we have now to manage cores are not as precise, making it harder to regulate how much work is being done by one set of cores over another," Beckman said. "But we're borrowing and adapting container concepts into high-performance computing to give users the ability to operate and manage how they're using those cores more carefully and directly."
Software Libraries
Applications rely on software libraries -- high-quality, reusable software collections -- to support simulations and other functionalities. To make these capabilities accessible at exascale, Argonne researchers are working to scale existing libraries.
"Libraries provide important capabilities, including solutions to numerical problems," said Argonne mathematician Barry Smith, who leads a project focused on scaling two libraries known as PETSc and TAO.
PETSc and TAO are widely used in large-scale numerical simulations. PETSc is a library of scalable solvers for the linear and nonlinear systems of equations that arise in such simulations. TAO is a library that solves large-scale optimization problems, such as calculating the most cost-effective strategy for reloading fuel rods in a nuclear reactor.
In addition to scaling diverse software libraries, ECP scientists are also looking for ways to improve their quality and compatibility.
"Libraries have traditionally been developed independently, and due to the different strategies used to design and implement them, it's been difficult to use multiple libraries in combinations. But large applications, like those that will run at exascale, need to be able to use all the layers of the software stack in combination," said Argonne computational scientist Lois Curfman McInnes.
McInnes is co-leading the xSDK project, which is determining community policies to regulate the implementation of software packages. Such policies will make it easier for diverse libraries to be compatible with one another.
"These efforts bring us one step closer to realizing a robust and agile exascale environment that can aid scientists in tackling great challenges," McInnes said.

Mathematicians deliver formal proof of Kepler Conjecture

A team led by mathematician Thomas Hales has delivered a formal proof of the Kepler Conjecture, which is the definitive resolution of a problem that had gone unsolved for more than 300 years. The paper is now available online through Forum of Mathematics, Pi, an open access journal published by Cambridge University Press. This paper not only settles a centuries-old mathematical problem, but is also a major advance in computer verification of complex mathematical proofs.
The Kepler Conjecture was a famous problem in discrete geometry, which asked for the most efficient way to cram spheres into a given space. The answer, while not difficult to guess (it's exactly how oranges are stacked in a supermarket), had been remarkably difficult to prove. Hales and his former graduate student Samuel Ferguson originally announced a proof in 1998, but the solution was so long and complicated that a team of a dozen referees spent years working on checking it before giving up.
Explains Henry Cohn, editor of Forum of Mathematics, Pi: "The verdict of the referees was that the proof seemed to work, but they just did not have the time or energy to verify everything comprehensively. The proof was published in 2005, and no irreparable flaws were ever identified, but it was an unsatisfactory situation that the proof was seemingly beyond the ability of the mathematics community to check thoroughly. To address this situation and establish certainty, Hales turned to computers, using techniques of formal verification. He and a team of collaborators wrote out the entire proof in extraordinary detail using strict formal logic, which a computer program then checked with perfect rigor. This paper is the result of their completed work."
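For a flavor of what machine-checked mathematics looks like, the tiny Lean 4 snippet below states two theorems whose proofs the proof assistant's kernel verifies step by step. Note that the Kepler formalization itself (the Flyspeck project) was carried out in the HOL Light and Isabelle proof assistants at vastly larger scale; Lean is used here only as an accessible illustration.

```lean
-- Minimal examples of machine-checked proofs (Lean 4).
-- The kernel accepts a theorem only if every step follows formally from the axioms.
theorem two_add_two : 2 + 2 = 4 := rfl

theorem add_comm_nat (a b : Nat) : a + b = b + a := Nat.add_comm a b
```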
Thomas Hales is the Mellon Professor of Mathematics at the University of Pittsburgh. His research spans discrete geometry, representation theory, motivic integration, and formal theorem proving.

'Multi-dimensional universe' in brain networks

Using mathematics in a novel way in neuroscience, scientists demonstrate that the brain operates on many dimensions, not just the 3 dimensions that we are accustomed to



For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions -- ground-breaking work that is beginning to reveal the brain's deepest architectural secrets.
Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.
The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
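The study works with directed cliques in directed networks; as a rough undirected analogue of the counting involved, the Python sketch below uses networkx to enumerate the maximal cliques of a random graph and reports the dimension of the simplex each would form (n all-to-all-connected nodes give an (n-1)-dimensional simplex). The graph here is a toy, not Blue Brain's reconstructed circuitry.

```python
from collections import Counter

import networkx as nx

# Toy undirected graph standing in for a (much larger, directed) neural network.
G = nx.gnp_random_graph(80, 0.25, seed=0)

dims = Counter()
for clique in nx.find_cliques(G):           # maximal cliques: all-to-all connected groups
    dims[len(clique) - 1] += 1              # n fully connected nodes -> (n-1)-simplex

for dim in sorted(dims):
    print(f"dimension {dim}: {dims[dim]} maximal cliques")
```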
"We found a world that we had never imagined," says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, "there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions."
Markram suggests this may explain why it has been so hard to understand the brain. "The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly."
If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.
"Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures -- the trees in the forest -- and see the empty spaces -- the clearings -- all at the same time," explains Hess.
In 2015, Blue Brain published the first digital copy of a piece of the neocortex -- the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain's wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.
When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. "The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner," says Levi. "It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates."
The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional "sandcastles" the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. "They may be 'hiding' in high-dimensional cavities," Markram speculates.

Story Source:
Materials provided by Frontiers.

Plastic made from sugar and carbon dioxide



Some biodegradable plastics could in the future be made using sugar and carbon dioxide, replacing unsustainable plastics made from crude oil, following research by scientists from the Centre for Sustainable Chemical Technologies (CSCT) at the University of Bath.
  • Polycarbonate is used to make drinks bottles, lenses for glasses and in scratch-resistant coatings for phones, CDs and DVDs
  • Current manufacture processes for polycarbonate use BPA (banned from use in baby bottles) and highly toxic phosgene, used as a chemical weapon in World War One
  • Bath scientists have made alternative polycarbonates from sugars and carbon dioxide in a new process that also uses low pressures and room temperature, making it cheaper and safer to produce
  • This new type of polycarbonate can be biodegraded back into carbon dioxide and sugar using enzymes from soil bacteria
  • This new plastic is bio-compatible so could in the future be used for medical implants or as scaffolds for growing replacement organs for transplant
Polycarbonates from sugars offer a more sustainable alternative to traditional polycarbonates made from BPA; however, the existing sugar-based process still relies on the highly toxic phosgene. Now scientists at Bath have developed a much safer, even more sustainable alternative that adds carbon dioxide to the sugar at low pressures and at room temperature.
The resulting plastic has similar physical properties to those derived from petrochemicals, being strong, transparent and scratch-resistant. The crucial difference is that they can be degraded back into carbon dioxide and sugar using the enzymes found in soil bacteria.
The new BPA-free plastic could potentially replace current polycarbonates in items such as baby bottles and food containers, and since the plastic is bio-compatible, it could also be used for medical implants or as scaffolds for growing tissues or organs for transplant.
Dr Antoine Buchard, Whorrod Research Fellow in the University's Department of Chemistry, said: "With an ever-growing population, there is an increasing demand for plastics. This new plastic is a renewable alternative to fossil-fuel based polymers, potentially inexpensive, and, because it is biodegradable, will not contribute to growing ocean and landfill waste.
"Our process uses carbon dioxide instead of the highly toxic chemical phosgene, and produces a plastic that is free from BPA, so not only is the plastic safer, but the manufacture process is cleaner too."
Dr Buchard and his team at the Centre for Sustainable Chemical Technologies, published their work in a series of articles in the journals Polymer Chemistry and Macromolecules.
In particular, they used nature as inspiration for the process, using the sugar found in DNA called thymidine as a building block to make a novel polycarbonate plastic with a lot of potential.
PhD student and first author of the articles, Georgina Gregory, explained: "Thymidine is one of the units that makes up DNA. Because it is already present in the body, it means this plastic will be bio-compatible and can be used safely for tissue engineering applications.
"The properties of this new plastic can be fine-tuned by tweaking the chemical structure -- for example we can make the plastic positively charged so that cells can stick to it, making it useful as a scaffold for tissue engineering." Such tissue engineering work has already started in collaboration with Dr Ram Sharma from Chemical Engineering, also part of the CSCT.
The researchers have also looked at using other sugars such as ribose and mannose. Dr Buchard added: "Chemists have 100 years' experience with using petrochemicals as a raw material so we need to start again using renewable feedstocks like sugars as a base for synthetic but sustainable materials. It's early days, but the future looks promising."
This work was supported by Roger and Sue Whorrod (Fellowship to Dr Buchard), the EPSRC (Centre for Doctoral Training in Sustainable Chemical Technologies) and a Royal Society Research Grant.

Technology which makes electricity from urine also kills pathogens, researchers find



A scientific breakthrough has taken an emerging biotechnology a step closer to being used to treat wastewater in the Developing World.
Researchers at the University of the West of England (UWE Bristol) (Ieropoulos & Greenman) have discovered that a technology they developed, which has already been proven to generate electricity by cleaning organic waste such as urine, also kills bacteria that are harmful to humans.
Experts have shown that a special process they have developed in which wastewater flows through a series of cells filled with electroactive microbes can be used to attack and destroy a pathogen -- the potentially deadly Salmonella.
It is envisaged that the microbial fuel cell (MFC) technology could one day be used in the Developing World in areas lacking sanitation and installed in homes in the Developed World to help clean waste before it flows into the municipal sewerage network, reducing the burden on water companies to treat effluent.
Professor Ioannis Ieropoulos, who is leading the research, said it was necessary to establish the technology could tackle pathogens in order for it to be considered for use in the Developing World.
The findings of the research have been published in leading scientific journal PLOS ONE. Professor Ieropoulos, Director of the Bristol BioEnergy Centre, based in the Bristol Robotics Laboratory at UWE Bristol, said it was the first time globally it had been reported that pathogens could be destroyed using this method.
He said: "We were really excited with the results -- it shows we have a stable biological system in which we can treat waste, generate electricity and stop harmful organisms making it through to the sewerage network."
It had already been established that the MFC technology created by Dr Ieropoulos' team could successfully clean organic waste, including urine, to the extent that it could be safely released into the environment. Through the same process, electricity is generated -- enough to charge a mobile phone or power lighting in earlier trials.
In the unique system, being developed with funding from the Bill & Melinda Gates Foundation, the organic content of the urine is consumed by microbes inside the fuel cells, breaking it down and creating energy.
For the pathogen experiment, Salmonella enteritidis was added to urine flowing through the system, and samples were checked at the end of the process to determine whether bacterial numbers had been reduced. Results revealed pathogen numbers had dropped significantly, beyond the minimum requirements used by the sanitation sector.
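For context, pathogen removal in the sanitation sector is commonly reported as a log10 reduction between inlet and outlet counts. The sketch below shows how that metric is calculated; the counts are invented for illustration, not figures from the UWE Bristol study.

```python
# Hypothetical illustration of the "log reduction" metric used in the
# sanitation sector; the counts below are invented, not from the study.
import math

cfu_in = 1.0e7   # assumed colony-forming units per mL entering the MFC cascade
cfu_out = 2.0e3  # assumed colony-forming units per mL at the outlet

log_reduction = math.log10(cfu_in / cfu_out)        # ~3.7 log10 units
percent_removed = (1.0 - cfu_out / cfu_in) * 100.0  # ~99.98 percent

print(f"log10 reduction: {log_reduction:.1f}")
print(f"percent removed: {percent_removed:.2f}%")
```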
Other pathogens, including viruses, are now being tested and there are plans for experiments which will establish if the MFC system can eliminate pathogens completely.
John Greenman, Emeritus Professor of Microbiology, said: "The wonderful outcome in this study was that tests showed a reduction in the number of pathogens beyond the minimum expectations in the sanitation world.
"We have reduced the number of pathogenic organisms significantly but we haven't shown we can bring them down to zero -- we will continue the work to test if we can completely eliminate them."
Professor Ieropoulos said his system could be beneficial to the wastewater industry because MFC systems fitted in homes could result in wastewater being cleaner when it reaches the sewerage system.
He said: "Water companies are under pressure to improve treatment and produce cleaner and cleaner water at the end of the process. This means costs are rising, energy consumption levels are high and chemicals that are not good for the environment are being used."

Wireless charging of moving electric vehicles overcomes major hurdle



If electric cars could recharge while driving down a highway, it would virtually eliminate concerns about their range and lower their cost, perhaps making electricity the standard fuel for vehicles.
Now Stanford University scientists have overcome a major hurdle to such a future by wirelessly transmitting electricity to a nearby moving object. Their results are published in the June 15 edition of Nature.
"In addition to advancing the wireless charging of vehicles and personal devices like cellphones, our new technology may untether robotics in manufacturing, which also are on the move," said Shanhui Fan, a professor of electrical engineering and senior author of the study. "We still need to significantly increase the amount of electricity being transferred to charge electric cars, but we may not need to push the distance too much more."
The group built on existing technology developed in 2007 at MIT for transmitting electricity wirelessly over a distance of a few feet to a stationary object. In the new work, the team transmitted electricity wirelessly to a moving LED lightbulb. That demonstration involved only 1 milliwatt of power, whereas electric cars often require tens of kilowatts to operate. The team is now working on greatly increasing the amount of electricity that can be transferred, and tweaking the system to extend the transfer distance and improve efficiency.
Driving range
Wireless charging would address a major drawback of plug-in electric cars -- their limited driving range. Tesla Motors expects its upcoming Model 3 to go more than 200 miles on a single charge, and the Chevy Bolt, which is already on the market, has an advertised range of 238 miles. But electric vehicle batteries generally take several hours to fully recharge. A charge-as-you-drive system would overcome these limitations.
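As a rough illustration of why plug-in charging takes hours, the sketch below divides an assumed battery capacity by typical charger power ratings; the numbers are illustrative round figures, not values from the article.

```python
# Illustrative arithmetic only: assumed round numbers, not figures from the
# article, showing why plug-in charging takes hours.
battery_kwh = 60.0         # assumed usable battery capacity
home_charger_kw = 7.2      # assumed Level 2 home charger
dc_fast_charger_kw = 50.0  # assumed DC fast charger

print(f"home charging: {battery_kwh / home_charger_kw:.1f} hours")     # ~8.3 h
print(f"fast charging: {battery_kwh / dc_fast_charger_kw:.1f} hours")  # ~1.2 h
```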
"In theory, one could drive for an unlimited amount of time without having to stop to recharge," Fan explained. "The hope is that you'll be able to charge your electric car while you're driving down the highway. A coil in the bottom of the vehicle could receive electricity from a series of coils connected to an electric current embedded in the road."
Some transportation experts envision an automated highway system where driverless electric vehicles are wirelessly charged by solar power or other renewable energy sources. The goal would be to reduce accidents and dramatically improve the flow of traffic while lowering greenhouse gas emissions.
Wireless technology could also assist the navigation of driverless cars. GPS is accurate only to within about 35 feet; coils embedded along the center of the lane would give autonomous cars a far more precise positioning reference, helping keep them safely centered as they charge.
Magnetic resonance
Mid-range wireless power transfer, as developed at Stanford and other research universities, is based on magnetic resonance coupling. Just as major power plants generate alternating current by rotating coils of wire between magnets, an alternating current moving through a coil of wire creates an oscillating magnetic field. This field causes electrons in a nearby coil of wire to oscillate, thereby transferring power wirelessly. The transfer efficiency is further enhanced if both coils are tuned to the same resonant frequency and are positioned at the correct angle.
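As a minimal sketch of that idea, the code below computes the shared resonant frequency of two idealised LC coils and a commonly used coupled-mode-theory estimate of the best-case coil-to-coil efficiency. The component values, coupling coefficient and quality factors are assumptions for the example, not parameters of the Stanford system.

```python
# Minimal sketch of magnetic resonance coupling with idealised LC resonators.
# All values are assumptions for illustration, not from the Stanford setup.
import math

L = 24e-6    # coil inductance in henries (assumed)
C = 100e-12  # tuning capacitance in farads (assumed)

# Both coils tuned to the same resonant frequency f0 = 1 / (2*pi*sqrt(L*C))
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"shared resonant frequency: {f0 / 1e6:.2f} MHz")

# Coupled-mode-theory estimate of the best-case coil-to-coil efficiency for
# coupling coefficient k and quality factors Q1, Q2.
k, Q1, Q2 = 0.05, 300.0, 300.0
U = k * math.sqrt(Q1 * Q2)                         # figure of merit
eta_max = U**2 / (1.0 + math.sqrt(1.0 + U**2))**2
print(f"best-case transfer efficiency: {eta_max:.0%}")
```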
However, the continuous flow of electricity can only be maintained if some aspects of the circuits, such as the frequency, are manually tuned as the object moves. So, either the energy transmitting coil and receiver coil must remain nearly stationary, or the device must be tuned automatically and continuously -- a significantly complex process.
To address the challenge, the Stanford team eliminated the radio-frequency source in the transmitter and replaced it with a commercially available voltage amplifier and feedback resistor. This system automatically finds the right frequency for different distances without the need for human intervention.
"Adding the amplifier allows power to be very efficiently transferred across most of the three-foot range and despite the changing orientation of the receiving coil," said graduate student Sid Assawaworrarit, the study's lead author. "This eliminates the need for automatic and continuous tuning of any aspect of the circuits."
Assawaworrarit tested the approach by placing an LED bulb on the receiving coil. In a conventional setup without active tuning, LED brightness would diminish with distance. In the new setup, the brightness remained constant as the receiver moved away from the source by a distance of about three feet. Fan's team recently filed a patent application for the latest advance.
The group used an off-the-shelf, general-purpose amplifier with a relatively low efficiency of about 10 percent. They say custom-made amplifiers can improve that efficiency to more than 90 percent.
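A back-of-the-envelope way to read those figures: the wall-to-load efficiency is roughly the product of the amplifier efficiency and the coil-to-coil transfer efficiency. The link efficiency below is an assumption for illustration, not a measurement from the study.

```python
# Back-of-the-envelope reading: wall-to-load efficiency is roughly amplifier
# efficiency times coil-to-coil efficiency. Illustrative assumptions only.
amp_eff_off_the_shelf = 0.10  # stated ~10 percent amplifier efficiency
amp_eff_custom = 0.90         # stated >90 percent for a custom amplifier
link_eff = 0.85               # assumed coil-to-coil efficiency within range

print(f"off-the-shelf amplifier: {amp_eff_off_the_shelf * link_eff:.0%} overall")
print(f"custom amplifier:        {amp_eff_custom * link_eff:.0%} overall")
```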
"We can rethink how to deliver electricity not only to our cars, but to smaller devices on or in our bodies," Fan said. "For anything that could benefit from dynamic, wireless charging, this is potentially very important."
The study was also co-authored by Stanford research associate Xiaofang Yu.

Story Source:
Materials provided by Stanford University. Original written by Mark Golden and Mark Shwartz.