Artificial cell membrane channels composed of DNA can be opened and locked with a key

Graphic shows the bilayer structure of a living cell membrane, composed of phospholipid. A phospholipid consists of a hydrophilic or water-loving head and hydrophobic or water-fearing tail. The hydrophobic tails are sandwiched between two layers of hydrophilic heads. At the center, a channel is shown, permitting the transport of biomolecules. The new study describes a process for creating artificial channels using segments of DNA that insert into cell membranes and allow the reversible transit of various cargo, including ions and proteins. Credit: Biodesign Institute at ASU

Just as countries import a vast array of consumer goods across national borders, so living cells are engaged in a lively import-export business. Their ports of entry are sophisticated transport channels embedded in a cell’s protective membrane. Regulating what kinds of cargo can pass through the borderlands formed by the cell’s two-layer membrane is essential for proper functioning and survival.

In new research, Arizona State University professor Hao Yan, along with ASU colleagues and international collaborators from University College London, describes the design and construction of artificial membrane channels engineered from short segments of DNA. The DNA constructions behave much in the manner of natural cell channels or pores, offering selective transport of ions, proteins and other cargo, with enhanced features unavailable in their naturally occurring counterparts.

These innovative DNA nanochannels may one day be applied in diverse scientific domains, ranging from biosensing and drug delivery applications to the creation of artificial cell networks capable of autonomously capturing, concentrating, storing, and delivering microscopic cargo.

“Many biological pores and channels are reversibly gated to allow ions or molecules to pass through,” Yan says. “Here we emulate these natural processes to engineer DNA nanopores that can be locked and opened in response to external ‘key’ or ‘lock’ molecules.”

Professor Yan is the Milton D. Glick Distinguished Professor in Chemistry and Biochemistry at ASU and directs the Biodesign Center for Molecular Design and Biomimetics. He is also a professor with ASU’s School of Molecular Sciences.

The research findings appear in the current issue of the journal Nature Communications.

All living cells are enveloped in a unique biological structure, the cell membrane. The science-y term for such membranes is phospholipid bilayer, meaning the membrane is formed from phosphate molecules attached to a fat or lipid component to form an outer and inner membrane layer.

These inner and outer membrane layers are a bit like a room’s inner and outer walls. But unlike normal walls, the space between inner and outer surfaces is fluid, resembling a sea. Further, cell membranes are said to be semipermeable, allowing designated cargo entry or exit from the cell. Such transport typically occurs when the transiting cargo binds with another molecule, altering the dynamics of the channel structure to permit entry into the cell, somewhat like the opening of the Panama Canal.

Semipermeable cell membranes are necessary for protecting sensitive ingredients within the cell from a hostile environment outside, while allowing the transit of ions, nutrients, proteins and other vital biomolecules.

Researchers, including Yan, have explored the possibility of creating selective membrane channels synthetically, using a technique known as DNA nanotechnology. The basic idea is simple. The double strands of DNA that form the genetic blueprint for all living organisms are held together through the base pairing of the molecule’s four nucleotides, labelled A, T, C and G. A simple rule applies, namely that A nucleotides always pair with T and C with G. Thus, a DNA segment ATTCTCG would form a complementary strand with TAAGAGC.
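
As a quick aside (not part of the study), the pairing rule is simple enough to express in a few lines of Python. The sketch below generates the position-by-position complement of a short segment:

```python
# Illustrative only: generate the base-paired complement of a DNA segment
# using the A-T / C-G rule described above.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the position-by-position complement of a DNA sequence."""
    return "".join(PAIR[base] for base in strand.upper())

print(complement("ATTCTCG"))  # -> TAAGAGC
# In a real duplex the two strands run antiparallel, so the partner strand read
# in its own 5'->3' direction is the reverse of this string.
```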

Base pairing of DNA allows the synthetic construction of a virtually limitless array of 2D and 3D nanostructures. Once a structure has been carefully designed, usually with the aid of a computer, the DNA segments can be mixed together and will self-assemble in solution into the desired form.

Creating a semipermeable channel using DNA nanotechnology, however, has proven a vexing challenge. Conventional techniques have failed to replicate the structure and capacities of nature-made membrane channels and synthetic DNA nanopores generally permit only one-way transport of cargo.

The new study describes an innovative method that allows researchers to design and construct a synthetic membrane channel permitting the transport of larger cargo than natural cell channels can accommodate. Unlike previous efforts to create DNA nanopores affixed to membranes, the new technique builds the channel structure step by step, by assembling the component DNA segments horizontally with respect to the membrane rather than vertically. The method permits the construction of nanopores with wider openings, allowing the transport of a greater range of biomolecules.

Further, the DNA design allows the channel to be selectively opened and closed by means of a hinged lid, equipped with a lock and key mechanism. The “keys” consist of sequence-specific DNA strands that bind with the channel’s lid and trigger it to open or close.

In a series of experiments, the researchers demonstrate the ability of the DNA channel to successfully transport cargo of varying sizes, ranging from tiny dye molecules to folded protein structures, some larger than the pore dimensions of natural channels.

The researchers used transmission electron microscopy to visualize the resulting structures, confirming that they conformed to the original design specifications.

Fluorescent dye molecules were used to verify that the DNA channels pierced and inserted themselves through the lipid bilayer, providing selective entry of transport molecules. The transport operation was carried out within one hour of formation, a significant improvement over previous DNA nanopores, which typically require 5-8 hours for complete biomolecule transit.

The DNA nanochannels may be used to capture proteins and closely examine their interactions with the biomolecules they bind, or to study the rapid and complex folding and unfolding of proteins. Such channels could also be used to exert fine-grained control over biomolecules entering cells, offering a new window on targeted drug delivery. Many other applications are likely to arise from the newfound ability to custom design artificial, self-assembling transport channels.



More information:
Swarup Dey et al, A reversibly gated protein-transporting membrane channel made of DNA, Nature Communications (2022). DOI: 10.1038/s41467-022-28522-2

Citation:
Artificial cell membrane channels composed of DNA can be opened and locked with a key (2022, May 10)
retrieved 11 May 2022
from https://phys.org/news/2022-05-artificial-cell-membrane-channels-dna.html


Drones and artificial intelligence aid hunt for fallen meteorite in outback Western Australia


Seamus Anderson with the meteorite at the location where it was found; the drone used in the search is visible in the background. Credit: Curtin University

Curtin researchers have recovered a freshly fallen meteorite after pinpointing its exact location on the vast Nullarbor Plain in Western Australia, with a new technique that uses a drone to collect footage of the landscape that is then scanned using artificial intelligence.

Lead researcher, graduate student Seamus Anderson from Curtin’s Space Science and Technology Centre (SSTC), said the find at Kybo Station late last year was a successful demonstration of the new method, which had the potential to greatly increase the number of recovered meteorites, particularly those observed as they fall through the atmosphere.

“A camera-fitted drone flies over and collects images of the fall zone, which are transferred to our field computer where an algorithm scans each image for meteorites and features that resemble them,” Mr. Anderson said.

“Although our algorithm was ‘trained’ on data collected from past searches, we brought with us previously recovered meteorites and imaged them on the ground at the fall site, to create local data with which to further train the algorithm.

“Meteorite searches usually involve a group of people walking over a large predicted impact area but our new method requires only about one tenth the amount of labor and time and has a much higher likely success rate, which is evident in the fact we located and recovered the meteorite within four days of being on site at Kybo Station.”
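
The team’s detection code is not reproduced here, but the workflow Anderson describes, tiling the drone imagery and scoring each tile with a trained classifier, can be sketched roughly as follows. The tile size, score threshold and classifier interface are illustrative assumptions, not the Curtin pipeline.

```python
# Rough sketch of the tile-and-classify search described above. The tile size,
# score threshold, and classifier interface are assumptions for illustration,
# not the Curtin team's actual pipeline.
import numpy as np

TILE = 128  # pixels per square tile (assumed)

def tiles(image: np.ndarray, size: int = TILE):
    """Yield (row, col, patch) tiles covering a drone image."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, image[r:r + size, c:c + size]

def find_candidates(image: np.ndarray, classifier, threshold: float = 0.9):
    """Score every tile with a trained classifier and keep likely meteorites."""
    candidates = []
    for r, c, patch in tiles(image):
        score = classifier(patch)  # hypothetical: returns probability of a meteorite
        if score >= threshold:
            candidates.append((r, c, score))
    return candidates
```

Flagged tiles would then be inspected in person in the field.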

Mr. Anderson said such meteorites, which are tracked by the Desert Fireball Network (DFN), were special because they gave a geologic sample of the particular region of the solar system from which they originated, contributing to an overall understanding of the geology of the solar system.

“New solutions such as our drone technique help make investments in space science and the study of meteorites more cost-effective and impactful,” Mr. Anderson said.

“Beyond increasing our understanding of the solar system, the study of meteorites is useful for many reasons. For example, meteorites often contain a higher concentration of rare and valuable metals such as cobalt, which is crucial to the construction of modern batteries.

“Also, by gaining a better understanding of how such material is distributed throughout the solar system, we may one day mine asteroids for precious resources, instead of scrounging for the finite amounts of them on Earth and perhaps harming precious ecosystems in the process.

“Other potential applications for our new approach using drones and artificial intelligence include wildlife management and conservation, as our model could be easily retrained to detect objects other than meteorites, such as plants and animals.”



More information:
Seamus L. Anderson et al, Successful Recovery of an Observed Meteorite Fall Using Drones and Machine Learning (2022). arXiv:2203.01466 [astro-ph.EP] arxiv.org/abs/2203.01466

Citation:
Drones and artificial intelligence aid hunt for fallen meteorite in outback Western Australia (2022, March 14)
retrieved 15 March 2022
from https://phys.org/news/2022-03-drones-artificial-intelligence-aid-fallen.html



Studying the big bang with artificial intelligence


A quark gluon plasma after the collision of two heavy nuclei. Credit: TU Wien

It could hardly be more complicated: tiny particles whir around wildly with extremely high energy, countless interactions occur in the tangled mess of quantum particles, and this results in a state of matter known as “quark-gluon plasma”. Immediately after the Big Bang, the entire universe was in this state; today it is produced by high-energy atomic nucleus collisions, for example at CERN.

Such processes can only be studied using high-performance computers and highly complex computer simulations whose results are difficult to evaluate. Therefore, using artificial intelligence or machine learning for this purpose seems like an obvious idea. Ordinary machine-learning algorithms, however, are not suitable for this task. The mathematical properties of particle physics require a very special structure of neural networks. At TU Wien (Vienna), it has now been shown how neural networks can be successfully used for these challenging tasks in particle physics.

Neural networks

“Simulating a quark-gluon plasma as realistically as possible requires an extremely large amount of computing time,” says Dr. Andreas Ipp from the Institute for Theoretical Physics at TU Wien. “Even the largest supercomputers in the world are overwhelmed by this.” It would therefore be desirable not to calculate every detail precisely, but to recognize and predict certain properties of the plasma with the help of artificial intelligence.

Therefore, neural networks are used, similar to those used for image recognition: Artificial “neurons” are linked together on the computer in a similar way to neurons in the brain—and this creates a network that can recognize, for example, whether or not a cat is visible in a certain picture.

When applying this technique to the quark-gluon plasma, however, there is a serious problem: the quantum fields used to mathematically describe the particles and the forces between them can be represented in various different ways. “This is referred to as gauge symmetries,” says Ipp. “The basic principle behind this is something we are familiar with: if I calibrate a measuring device differently, for example if I use the Kelvin scale instead of the Celsius scale for my thermometer, I get completely different numbers, even though I am describing the same physical state. It’s similar with quantum theories—except that there the permitted changes are mathematically much more complicated.” Mathematical objects that look completely different at first glance may in fact describe the same physical state.

Gauge symmetries built into the structure of the network

“If you don’t take these gauge symmetries into account, you can’t meaningfully interpret the results of the computer simulations,” says Dr. David I. Müller. “Teaching a neural network to figure out these gauge symmetries on its own would be extremely difficult. It is much better to start out by designing the structure of the neural network in such a way that the gauge symmetry is automatically taken into account—so that different representations of the same physical state also produce the same signals in the neural network,” says Müller. “That is exactly what we have now succeeded in doing: We have developed completely new network layers that automatically take gauge invariance into account.” In some test applications, it was shown that these networks can actually learn much better how to deal with the simulation data of the quark-gluon plasma.
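
The lattice gauge equivariant layers themselves are mathematically involved, but the thermometer analogy above can be made concrete with a toy sketch: if the “gauge freedom” is modeled as an arbitrary constant offset added to every value (Celsius versus Kelvin), then a layer built only from differences of neighbouring values is invariant by construction. This is an analogy for the design principle only, not the actual network from the paper.

```python
# Toy analogy for building a symmetry into a network layer (not the lattice
# gauge equivariant CNN from the paper): if the "gauge freedom" is an arbitrary
# constant offset (Celsius vs. Kelvin), a layer that only sees differences of
# neighbouring values produces identical output in either convention.
import numpy as np

def difference_layer(field: np.ndarray) -> np.ndarray:
    """Nearest-neighbour differences: invariant under field -> field + constant."""
    return np.roll(field, -1) - field

rng = np.random.default_rng(0)
field_celsius = rng.normal(size=8)        # a small 1D "field" in one convention
field_kelvin = field_celsius + 273.15     # same physics, different convention

print(np.allclose(difference_layer(field_celsius),
                  difference_layer(field_kelvin)))  # True: offset-invariant
```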

“With such neural networks, it becomes possible to make predictions about the system—for example, to estimate what the quark-gluon plasma will look like at a later point in time without really having to calculate every single intermediate step in time in detail,” says Andreas Ipp. “And at the same time, it is ensured that the system only produces results that do not contradict gauge symmetry—in other words, results which make sense at least in principle.”

It will be some time before it is possible to fully simulate atomic nuclei collisions at CERN with such methods, but the new type of neural network provides a completely new and promising tool for describing physical phenomena for which all other computational methods may never be powerful enough.

The research was published in Physical Review Letters.



More information:
Matteo Favoni et al, Lattice Gauge Equivariant Convolutional Neural Networks, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.128.032003

Citation:
Studying the big bang with artificial intelligence (2022, January 25)
retri


Fractured artificial rock helps crack a 54-year-old mystery


Princeton researchers have developed a technique to better understand how polymers flow through small channels under pressure. Credit: David Kelly Crow

Princeton researchers have solved a 54-year-old puzzle about why certain fluids strangely slow down under pressure when flowing through porous materials, such as soils and sedimentary rocks. The findings could help improve many important processes in energy, environmental and industrial sectors, from oil recovery to groundwater remediation.

The fluids in question are called polymer solutions. These solutions—everyday examples of which include cosmetic creams and the mucus in our noses—contain dissolved polymers, or materials made of large molecules with many repeating subunits. Typically, when they’re put under pressure, polymer solutions become less viscous and flow faster. But when going through materials with lots of tiny holes and channels, the solutions tend to become more viscous and gunky, reducing their flow rates.

To get at the root of the problem, the Princeton researchers devised an innovative experiment using a see-through porous medium made of tiny glass beads—a transparent artificial rock. This lucid medium allowed the researchers to visualize a polymer solution’s movement. The experiment revealed that the long-baffling increase in viscosity in porous media happens because the polymer solution’s flow becomes chaotic, much like turbulent air on an airplane ride, swirling into itself and gumming up the works.

“Surprisingly, until now, it has not been possible to predict the viscosity of polymer solutions flowing in porous media,” said Sujit Datta, an assistant professor of chemical and biological engineering at Princeton and senior author of the study appearing Nov. 5 in the journal Science Advances. “But in this paper, we’ve now finally shown these predictions can be made, so we’ve found an answer to a problem that has eluded researchers for over a half-century.”

“With this study, we finally made it possible to see exactly what is happening underground or within other opaque, porous media when polymer solutions are being pumped through,” said Christopher Browne, a Ph.D. student in Datta’s lab and the paper’s lead author.

Browne ran the experiments and built the experimental apparatus, a small rectangular chamber randomly packed with tiny borosilicate glass beads. The setup, akin to an artificial sedimentary rock, spanned only about half the length of a pinky finger. Into this faux rock, Browne pumped a common polymer solution laced with fluorescent latex microparticles to help see the solution’s flow around the beads. The researchers formulated the polymer solution so the material’s refractive index offset light distortion from the beads and made the whole setup transparent when saturated. Datta’s lab has innovatively used this technique to create see-through soil for studying ways to counter agricultural droughts, among other investigations.

Browne then zoomed in with a microscope on the pores, or holes between the beads, which occur on the scale of 100 micrometers (millionths of a meter) in size, similar to the width of a human hair, in order to examine the flow through each pore. As the polymer solution worked its way through the porous medium, the fluid’s flow became chaotic, with the fluid crashing back into itself and generating turbulence. What’s surprising is that, typically, fluid flows at these speeds and in such tight pores are not turbulent, but “laminar”: the fluid moves smoothly and steadily. As the polymers navigated the pore space, however, they stretched out, generating forces that accumulated and generated turbulent flow in different pores. This effect grew more pronounced when pushing the solution through at higher pressures.

“I was able to see and record all these patchy regions of instability, and these regions really impact the transport of the solution through the medium,” said Browne.


The Princeton researchers used data gathered from the experiment to formulate a way to predict the behavior of polymer solutions in real-life situations.
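
One way to make such a prediction concrete is the apparent viscosity defined through Darcy’s law, a generic textbook relation rather than the paper’s specific model: for a flow rate Q driven by a pressure drop ΔP across a medium of permeability k, cross-sectional area A and length L, the apparent viscosity is k·A·ΔP/(Q·L). A minimal sketch with made-up numbers:

```python
# Minimal sketch: backing an apparent viscosity out of Darcy's law,
# mu_app = k * A * dP / (Q * L). The numbers are illustrative, not the paper's data.

def apparent_viscosity(k, area, dP, Q, length):
    """Apparent viscosity (Pa*s) from permeability k (m^2), cross-sectional area
    (m^2), pressure drop dP (Pa), flow rate Q (m^3/s), and sample length (m)."""
    return k * area * dP / (Q * length)

mu = apparent_viscosity(k=1e-11, area=1e-4, dP=5e4, Q=1e-9, length=2e-2)
print(f"{mu:.2f} Pa*s")  # 2.50 Pa*s for this made-up case
```

An anomalous increase in flow resistance shows up as an apparent viscosity that climbs with flow rate instead of falling, which is the behavior the Princeton measurements explain.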

Gareth McKinley, a professor of mechanical engineering at the Massachusetts Institute of Technology who was not involved in the study, offered comments on its significance.

“This study shows definitively that the large increase in the macroscopically observable pressure drop across a porous medium has its microscopic physical origins in viscoelastic flow instabilities that occur on the pore scale of the porous medium,” McKinley said.

Given that viscosity is one of the most fundamental descriptors of fluid flow, the findings not only help deepen understanding of polymer solution flows and chaotic flows in general, but also provide quantitative guidelines to inform their applications at large scales in the field.

“The new insights we have generated could help practitioners in diverse settings determine how to formulate the right polymer and use the right pressures needed to carry out the task at hand,” said Datta. “We’re particularly excited about the findings’ application in groundwater remediation.”

Because polymer solutions are inherently goopy, environmental engineers inject the solutions into the ground at highly contaminated sites such as abandoned chemical factories and industrial plants. The viscous solutions help push out trace contaminants from the affected soils. Polymer solutions likewise aid in oil recovery by pushing oil out of the pores in underground rocks. On the remediation side, polymer solutions enable “pump and treat,” a common method for cleaning up groundwater polluted with industrial chemicals and metals that involves bringing the water to a surface treatment station. “All these applications of polymer solutions, and more, such as in separations and manufacturing processes, stand to benefit from our findings,” said Datta.

Overall, the new findings on flow rates in porous media brought together ideas from multiple fields of scientific inquiry, ultimately disentangling what had started out as a long-frustrating, complex problem.

“This work draws connections between studies of polymer physics, turbulence, and geoscience, following the flow of fluids in rocks underground as well as through aquifers,” said Datta. “It’s a lot of fun sitting at the interface between all these different disciplines.”



More information:
Christopher A. Browne et al, Elastic turbulence generates anomalous flow resistance in porous media, Science Advances (2021). DOI: 10.1126/sciadv.abj2619. www.science.org/doi/10.1126/sciadv.abj2619

Citation:
Fractured artificial rock helps crack a 54-year-old mystery (2021, November 5)
retrieved 7 November 2021
from https://phys.org/news/2021-11-fractured-artificial-year-old-mystery.html



Artificial intelligence helps improve NASA’s eyes on the Sun


This image shows seven of the ultraviolet wavelengths observed by the Atmospheric Imaging Assembly on board NASA’s Solar Dynamics Observatory. The top row shows observations taken in May 2010 and the bottom row shows observations from 2019, without any corrections, illustrating how the instrument degraded over time. Credit: Luiz Dos Santos/NASA GSFC

A group of researchers is using artificial intelligence techniques to calibrate some of NASA’s images of the Sun, helping improve the data that scientists use for solar research. The new technique was published in the journal Astronomy & Astrophysics on April 13, 2021.

A solar telescope has a tough job. Staring at the Sun takes a harsh toll, with a constant bombardment by a never-ending stream of solar particles and intense sunlight. Over time, the sensitive lenses and sensors of solar telescopes begin to degrade. To ensure the data such instruments send back is still accurate, scientists recalibrate periodically to make sure they understand just how the instrument is changing.

Launched in 2010, NASA’s Solar Dynamics Observatory, or SDO, has provided high-definition images of the Sun for over a decade. Its images have given scientists a detailed look at various solar phenomena that can spark space weather and affect our astronauts and technology on Earth and in space. The Atmospheric Imaging Assembly, or AIA, is one of two imaging instruments on SDO and looks constantly at the Sun, taking images across 10 wavelengths of ultraviolet light every 12 seconds. This creates a wealth of information about the Sun like no other, but—like all Sun-staring instruments—AIA degrades over time, and the data needs to be frequently calibrated.

Since SDO’s launch, scientists have used sounding rockets to calibrate AIA. Sounding rockets are smaller rockets that typically carry only a few instruments and take short flights into space—usually only 15 minutes. Crucially, sounding rockets fly above most of Earth’s atmosphere, allowing instruments on board to see the ultraviolet wavelengths measured by AIA. These wavelengths of light are absorbed by Earth’s atmosphere and can’t be measured from the ground. To calibrate AIA, scientists would attach an ultraviolet telescope to a sounding rocket and compare that data to the measurements from AIA. Scientists can then make adjustments to account for any changes in AIA’s data.
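
In essence, each flight yields a per-channel correction factor: the ratio of the rocket instrument’s freshly calibrated measurement to AIA’s reading of the same scene, which is then applied to AIA data until the next flight. A minimal sketch of that bookkeeping, with invented numbers rather than flight data:

```python
# Minimal sketch of cross-calibration against a sounding-rocket flight.
# The ratio interpretation and the numbers are illustrative, not mission code.

def correction_factor(rocket_value: float, aia_value: float) -> float:
    """Multiplicative correction that brings AIA back in line with the rocket."""
    return rocket_value / aia_value

factor = correction_factor(rocket_value=100.0, aia_value=80.0)  # AIA has dimmed by 20%
corrected = 80.0 * factor
print(factor, corrected)  # 1.25 100.0
```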

There are some drawbacks to the sounding rocket method of calibration. Sounding rockets can only launch so often, but AIA is constantly looking at the Sun. That means there’s downtime where the calibration is slightly off in between each sounding rocket calibration.

“It’s also important for deep space missions, which won’t have the option of sounding rocket calibration,” said Dr. Luiz Dos Santos, a solar physicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and lead author on the paper. “We’re tackling two problems at once.”

Virtual calibration

With these challenges in mind, scientists decided to look at other options to calibrate the instrument, with an eye towards constant calibration. Machine learning, a technique used in artificial intelligence, seemed like a perfect fit.

As the name implies, machine learning requires a computer program, or algorithm, to learn how to perform its task.

The top row of images shows the degradation of AIA’s 304 Angstrom wavelength channel over the years since SDO’s launch. The bottom row of images is corrected for this degradation using a machine learning algorithm. Credit: Luiz Dos Santos/NASA GSFC

First, researchers needed to train a machine learning algorithm to recognize solar structures and how to compare them using AIA data. To do this, they give the algorithm images from sounding rocket calibration flights and tell it the correct amount of calibration needed. After enough of these examples, they give the algorithm similar images and see if it identifies the correct calibration. With enough data, the algorithm learns to identify how much calibration is needed for each image.

Because AIA looks at the Sun in multiple wavelengths of light, researchers can also use the algorithm to compare specific structures across the wavelengths and strengthen its assessments.

To start, they would teach the algorithm what a solar flare looked like by showing it solar flares across all of AIA’s wavelengths until it recognized solar flares in all different types of light. Once the program can recognize a solar flare without any degradation, the algorithm can then determine how much degradation is affecting AIA’s current images and how much calibration is needed for each.
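
The training loop described above amounts to supervised regression: image in, required calibration out. The toy sketch below captures that setup with synthetic data and a stand-in linear model, not the team’s actual architecture or AIA images.

```python
# Toy sketch of the supervised setup described above: learn to map an image to
# the calibration it needs, using labelled pairs like those from rocket flights.
# The synthetic data and the Ridge model are stand-ins, not the team's method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_images, n_pixels = 200, 64 * 64

# Pretend each image has dimmed by some unknown sensitivity factor.
sensitivity = rng.uniform(0.5, 1.0, size=n_images)
images = rng.uniform(0.0, 1.0, size=(n_images, n_pixels)) * sensitivity[:, None]

# The "correct amount of calibration" label: here, the sensitivity to recover.
labels = sensitivity

model = Ridge().fit(images[:150], labels[:150])   # train on labelled examples
print(model.score(images[150:], labels[150:]))    # accuracy on held-out images
```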

“This was the big thing,” Dos Santos said. “Instead of just identifying it on the same wavelength, we’re identifying structures across the wavelengths.”

This means researchers can be more sure of the calibration the algorithm identified. Indeed, when comparing their virtual calibration data to the sounding rocket calibration data, the machine learning program was spot on.

With this new process, researchers are poised to constantly calibrate AIA’s images between calibration rocket flights, improving the accuracy of SDO’s data for researchers.

Machine learning beyond the Sun

Researchers have also been using machine learning to better understand conditions closer to home.

One group of researchers led by Dr. Ryan McGranaghan—Principal Data Scientist and Aerospace Engineer at ASTRA LLC and NASA Goddard Space Flight Center—used machine learning to better understand the connection between Earth’s magnetic field and the ionosphere, the electrically charged part of Earth’s upper atmosphere. By applying data science techniques to large volumes of data, they could use machine learning to develop a newer model that helped them better understand how energized particles from space rain down into Earth’s atmosphere, where they drive space weather.

As machine learning advances, its scientific applications will expand to more and more missions. In the future, this may mean that deep space missions—which travel to places where rocket flights aren’t possible—can still be calibrated and continue giving accurate data, even when getting out to greater and greater distances from Earth or any stars.



More information:
Luiz F. G. Dos Santos et al, Multichannel autocalibration for the Atmospheric Imaging Assembly using machine learning, Astronomy & Astrophysics (2021). DOI: 10.1051/0004-6361/202040051

Citation:
Artificial intelligence helps improve NASA’s eyes on the Sun (2021, July 24)
retrieved 24 July 2021
from https://phys.org/news/2021-07-artificial-intelligence-nasa-eyes-sun.html




Korean artificial sun sets the new world record of 20-sec-long operation at 100 million degrees


Credit: National Research Council of Science & Technology

The Korea Superconducting Tokamak Advanced Research (KSTAR), a superconducting fusion device also known as the Korean artificial sun, set a new world record by maintaining high-temperature plasma for 20 seconds with an ion temperature over 100 million degrees.

On November 24 (Tuesday), the KSTAR Research Center at the Korea Institute of Fusion Energy (KFE) announced that, in joint research with Seoul National University (SNU) and Columbia University in the United States, it succeeded in the continuous operation of plasma for 20 seconds with an ion temperature higher than 100 million degrees, one of the core conditions of nuclear fusion, during the 2020 KSTAR Plasma Campaign.

This achievement extends the 8-second plasma operation time achieved during the 2019 KSTAR Plasma Campaign by more than a factor of two. In its 2018 experiment, KSTAR reached a plasma ion temperature of 100 million degrees for the first time (retention time: about 1.5 seconds).

To re-create on Earth the fusion reactions that occur in the sun, hydrogen isotopes must be placed inside a fusion device like KSTAR to create a plasma state in which ions and electrons are separated, and the ions must be heated and maintained at high temperatures.

So far, other fusion devices have briefly managed plasma at temperatures of 100 million degrees or higher, but none of them maintained the operation for 10 seconds or longer. That is the operational limit of normal-conducting devices, and it has been difficult to maintain a stable plasma state in a fusion device at such high temperatures for a long time.

In its 2020 experiment, KSTAR improved the performance of the Internal Transport Barrier (ITB) mode, one of the next-generation plasma operation modes developed last year, and succeeded in maintaining the plasma state for a long period of time, overcoming the existing limits of ultra-high-temperature plasma operation.

Director Si-Woo Yoon of the KSTAR Research Center at the KFE explained, “The technologies required for long operations of 100 million-degree plasma are the key to the realization of fusion energy, and the KSTAR’s success in maintaining the high-temperature plasma for 20 seconds will be an important turning point in the race for securing the technologies for long high-performance plasma operation, a critical component of a commercial nuclear fusion reactor in the future.”

“The success of the KSTAR experiment in the long, high-temperature operation by overcoming some drawbacks of the ITB modes brings us a step closer to the development of technologies for realization of nuclear fusion energy,” added Yong-Su Na, professor at the department of Nuclear Engineering, SNU, who has been jointly conducting the research on the KSTAR plasma operation.

Dr. Young-Seok Park of Columbia University, who contributed to the creation of the high temperature plasma, said, “We are honored to be involved in such an important achievement made in KSTAR. The 100 million-degree ion temperature achieved by enabling efficient core plasma heating for such a long duration demonstrated the unique capability of the superconducting KSTAR device, and will be acknowledged as a compelling basis for high performance, steady state fusion plasmas.”

The KSTAR team began operating the device last August and plans to continue its plasma generation experiment until December 10, conducting a total of 110 plasma experiments that include high-performance plasma operation and plasma disruption mitigation experiments, which are joint research experiments with domestic and overseas research organizations.

In addition to its success in high-temperature plasma operation, the KSTAR Research Center is conducting experiments on a variety of topics, including ITER-related research, designed to solve complex problems in fusion research during the remainder of the experiment period.

KSTAR will share its key 2020 experimental outcomes, including this success, with fusion researchers around the world at the IAEA Fusion Energy Conference, which will be held in May.

The final goal of the KSTAR is to succeed in a continuous operation of 300 seconds with an ion temperature higher than 100 million degrees by 2025.

KFE President Suk Jae Yoo stated, “I am so glad to announce the new launch of the KFE as an independent research organization of Korea. The KFE will continue its tradition of undertaking challenging research to achieve the goal of mankind: the realization of nuclear fusion energy.”

On November 20, 2020, the KFE, formerly the National Fusion Research Institute, an affiliated organization of the Korea Basic Science Institute, was re-launched as an independent research organization.



Provided by
National Research Council of Science & Technology

Citation:
Korean artificial sun sets the new world record of 20-sec-long operation at 100 million degrees (2020, December 24)
retrieved 24 December 2020
from https://phys.org/news/2020-12-korean-artificial-sun-world-sec-long.html

