Advances in biological imaging have given scientists unprecedentedly large, high-resolution datasets, yet data interpretation tools are struggling to keep up. This is particularly evident in cryo-electron tomography (cryo-ET), where samples exhibit inherently low contrast because only a limited electron dose can be applied during imaging before radiation damage sets in.

Segmenting these cell tomograms remains a challenging task, one that is most accurately performed by human experts willing to devote an enormous amount of time to it. Because that approach does not scale to large datasets, a group of Berkeley Lab scientists recently developed and tested several machine learning techniques, organized into a single pipeline, to segment and identify membrane structures in cryo-ET cell tomograms. A paper describing their approach, “A machine learning pipeline for membrane segmentation of cryo-electron tomograms,” was published this month in the Journal of Computational Science.

“One of the main difficulties with these types of images is that they’re very noisy,” said Chao Yang, a senior scientist in the Applied Mathematics and Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) and one of the paper’s authors. “It’s the main challenge when you are trying to detect some type of structure or segment the images — it could take one scientist several months to get one tomogram all segmented correctly.”
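The difficulty Yang describes, pulling faint membrane boundaries out of heavily noisy data, can be pictured with a deliberately simplified sketch. The snippet below is not the team's machine learning pipeline; it is a minimal Python illustration, assuming a synthetic 2-D slice, in which a Gaussian filter stands in for denoising and a gradient-magnitude threshold stands in for segmentation. All variable names and parameter values are illustrative.

```python
# Hypothetical toy example (not the pipeline from the paper): simulate a noisy
# 2-D tomogram slice, smooth it, then flag membrane-like pixels by thresholding
# the gradient magnitude of the smoothed image.
import numpy as np
from scipy import ndimage

# Simulate a 256x256 slice: a thin circular "membrane" buried in heavy noise,
# mimicking the low contrast imposed by the limited electron dose.
yy, xx = np.mgrid[0:256, 0:256]
radius = np.hypot(yy - 128, xx - 128)
membrane = np.exp(-((radius - 80.0) ** 2) / (2.0 * 2.0 ** 2))  # thin ring
slice_2d = membrane + np.random.normal(0.0, 1.0, membrane.shape)

# Step 1: suppress noise with a Gaussian filter (a stand-in for the more
# sophisticated denoising a real cryo-ET workflow would use).
smoothed = ndimage.gaussian_filter(slice_2d, sigma=3.0)

# Step 2: membranes appear as sharp intensity transitions, so threshold the
# gradient magnitude of the smoothed slice to get a rough binary mask.
gradient = ndimage.gaussian_gradient_magnitude(smoothed, sigma=2.0)
mask = gradient > gradient.mean() + 2.0 * gradient.std()

print(f"flagged {mask.sum()} candidate membrane pixels out of {mask.size}")
```

In the team's actual pipeline, machine learning techniques take the place of these simple filters, but the basic problem is the same: isolating faint structures from very noisy volumes.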
