Addressing the challenges of high-throughput SARS-CoV-2 testing





It is now widely recognized that high-throughput sample testing is critical for effective control of SARS-CoV-2, the virus that causes COVID-19. As a result, laboratories worldwide are rapidly scaling up their sample testing capabilities; however, meeting the unprecedented level of sample throughput, while maintaining quality and accuracy, is not without challenge.

In this webinar, organized and hosted by Twist Bioscience, Biogazelle CSO Prof. Jo Vandesompele shared the podium with UgenTec to discuss tips and best-practice experiences from setting up and optimizing SARS-CoV-2 testing workflows.

Chris Thorne (Twist Bioscience):
Hello, everyone and good afternoon, good morning, good evening wherever you are, and welcome to today's Twist webinar and round table discussion on addressing the challenges of high-throughput SARS-CoV-2 testing. My name's Chris Thorne and I am the Senior Manager for Field Marketing here at Twist. Before we begin, just a little bit of housekeeping. As you've noticed, your microphones aren't switched on, so we'll keep all lines muted during the webinar. We will be taking questions, and we'll be having what I hope is an interesting discussion after the presentations. If you do have questions, please use the Q&A box at the bottom of your screen rather than submitting them in the chat. Finally, once the event wraps up and you leave, we'll send you over to a brief survey, and we'd be very grateful for any feedback you can give us by filling that in.

Chris Thorne (Twist Bioscience):
Okay, so the format for today's webinar, if you like, is slightly different from our previous webinars in that we're very excited to have two speakers joining us today presenting on the topic of addressing the challenge of high-throughput SARS-CoV-2 testing. I'd really like to encourage everyone to stay until the end as once we've done the presentations, our speakers will then be joined by Rebecca Nugent, Twist Director of R&D for what should be a really insightful round table style Q&A afterwards. As I said, if you have questions, please do submit them as we go along in the Q&A box.

Chris Thorne (Twist Bioscience):
The first of our speakers today will be Jo Vandesompele, who is the CSO and founder of Biogazelle, a CRO specializing in high-value genomics applications to support pharmaceutical research, clinical trials and diagnostic test development. Jo is also a professor in functional cancer genomics and applied bioinformatics at Ghent University, Belgium, and a world-renowned expert in the domain of RNA quantification and non-coding RNA.

Chris Thorne (Twist Bioscience):
Our second speaker today will be James Grayson. James is the Lead Field Applications Specialist for the Americas at UgenTec, where he brings over 10 years of experience automating laboratory workflows to that role. Prior to joining UgenTec, James was the Head of Molecular Workflow Automation at Siemens Healthineers, where he worked to capture customer workflow needs and develop customized solutions.

Chris Thorne (Twist Bioscience):
Okay, so Jo, I'm going to hand over to you now and let you take it from here.

Jo Vandesompele (Biogazelle)

Slide 3: All right, so thanks very much for the kind introduction. I'm happy to be here. While we await a vaccine to prevent COVID-19 or a medicine to cure it, the key priority is to prevent the virus from spreading. We can do this by physical distancing, washing our hands, and wearing a face mask when appropriate, combined with massive PCR testing to detect and isolate infected individuals.

Slide 4: Biogazelle holds a unique forefront position in the application of quantitative PCR, digital PCR, and RNA sequencing. We are an ISO17025 accredited contract research organization for qPCR test development and use in clinical trials. The company was founded on a revolutionary method for qPCR normalization (geNorm) and data analysis (qbase+), with more than 20,000 citations and thousands of customers worldwide. The company founders co-authored the MIQE guidelines for the design, execution, analysis, and reporting of qPCR studies, again with more than 10,000 citations thus far. We have wet-lab validated more than 100,000 qPCR assays, beyond industry standards; the SARS-CoV-2 assay is just one of them. And since April, we are also part of the CSWG, an initiative from JIMB (the Joint Initiative for Metrology in Biology) at Stanford University, dealing with the development of, and provision of access to, standards, control materials, inter-lab comparisons, and knowledge to perform accurate SARS-CoV-2 tests. The ultimate goal is to build a ‘COVID-19 Diagnostic Standards Development Partnership’.

Slide 5: The SARS-CoV-2 RT-qPCR test consists of 4 main analytical steps, indicated here in pink. It involves the transfer of the primary sample tube to a 96-well plate to increase throughput downstream, followed by viral RNA purification, RT-qPCR detection of the virus, and data analysis and authorization. Importantly, the entire workflow is quite complex and is, in fact, a multiparty workflow, involving health care professionals, logistic partners or clinical labs to make the test kits (swab and tube), distribute and collect them, and bring them to the lab, and IT partners for sample registration and reporting of results to the doctor. And if possible, all of this should happen within 24 hours.

Slide 6: To visualize the complex workflow, a nice 3-minute video was made. The link will be shared by the webinar moderator.

Slide 7: In the midst of worldwide supply chain issues for reagents and instruments, we set up a scalable platform in less than 2 weeks, adhering to a set of unique design principles. This was only possible because of our expertise in PCR test development and its use in clinical trials. First of all, we wanted our platform to be flexible and modular, such that different components from different suppliers could work together, each optimized for their specific part of the workflow. Modules can be interchanged, and we use automation where it can make a difference.
The platform had to be high throughput, with the aim of at least 6,000 samples per day, and smartly scalable. This means that we can introduce more units of a given module, or more instruments in a given module, if needed.

In terms of business continuity, we have established a strategic stock, standing orders, drop shipments and validated alternative suppliers. So we really needed a dedicated procurement officer to manage all that. For instance, for the RT-qPCR mix and the RNA extraction, we have two validated suppliers to mitigate supply chain issues. We also wanted our platform to be autonomous, so no A-to-Z commercial solution, as most of these are relatively slow or face worldwide shortages of instruments and consumables. Finally, our platform had to be high quality by default, but built for constant innovation and improvement through internal cross-validation.

Slide 8: Our platform consists of 16 instruments to process 6000 samples per day. For further scaling, we don’t need to linearly increase each instrument. For instance, our qPCR setup robot can handle many more samples as it is. For qPCR, we have not used the stacking module of the qPCR instrument, but validated it for stacking of a few plates (for DNA, we stack many more). Extraction scales per 2000 samples per centrifuge. This is what we call modular, flexible and smart scaling.

Slide 9: The most challenging step in the entire process is the transfer of the individual patient tube into a 96-well plate for downstream high-throughput processing in standard SBS format (8 rows, 12 columns). Not only because this is a very time-consuming process, but also because of safety concerns: the inside and outside of the tube may be contaminated with active virus, posing a biosafety risk for the operators. Therefore, everything is processed by skilled operators in a class II biosafety cabinet.

One team of 3 operators can process 470 samples in a 4 hour shift. Processing means the transfer of an aliquot of swab transport medium to a deep-well plate, followed by the addition of lysis buffer, the first step of RNA extraction, which inactivates the virus and makes it safe to continue with the 96-well plate outside the cabinet. To process 3000 samples during an 8 hour working day, 18 operators are needed and 9 biosafety cabinets. Because of the highly focused work, shifts are limited to 4 hours. To reduce the dependency on qualified manual operators, we have also introduced automation.

Slide 10: Our Tecan Freedom EVO 200 robotic system has a barcode scanner to scan the patient tubes and the destination plate, and an 8-channel liquid handling module with liquid sensing to aspirate from the patient tube and dispense into a 96-deep-well plate that has been pre-filled with lysis buffer using the 96-multichannel arm. This robot can process 3000 tubes in an 8-hour shift with 3 operators, a reduction in staffing of more than 6-fold compared to the manual procedure. Importantly, fine-tuning is critically required to handle the swabs inside the tubes and the very viscous nature of the samples.

Slide 11: To select a suitable RNA extraction kit, we evaluated cost, ease of use, throughput, guaranteed availability and, of course, performance. We went to great lengths in comparing 5 different RNA extraction methods on a standardized set of viral stock dilutions and positive and negative patient samples. Patient samples and viral stock were collected in 4 different transport media that are routinely used, including liquid Amies (eSwab buffer) and phosphate-buffered saline. The results were quite shocking, with different RNA extraction methods showing very different performance; the transport buffers also differed greatly, with clear interactions between kit and buffer. On the left, I show you the serial dilution series of the inactivated viral stock dilutions from 10^-3 to 10^-5 for 2 RNA kits and 4 buffers: Cq values on the Y-axis, log10 of the dilution factor on the X-axis. The linearity and slopes are all excellent, but the intercept values differ greatly. The best and worst kit/buffer combinations differ by more than 5 PCR cycles, equivalent to a more than 30-fold difference in RNA detection sensitivity.
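The cycles-to-fold conversion quoted above follows from the exponential nature of PCR: with ideal chemistry the template doubles every cycle, so a Cq shift maps to a fold difference as 2^ΔCq. A minimal sketch (the 100% efficiency default is an idealizing assumption; real reactions run slightly below it):

```python
# Convert a Cq difference between two kit/buffer combinations into a
# fold difference in RNA detection sensitivity. With ideal PCR the
# template doubles each cycle, so fold = 2 ** delta_cq.
def fold_difference(delta_cq, efficiency=1.0):
    # efficiency: fraction of template copied per cycle (1.0 = 100%)
    return (1.0 + efficiency) ** delta_cq

print(fold_difference(5))  # 5 cycles at 100% efficiency -> 32.0-fold
```

A 5-cycle intercept shift therefore corresponds to roughly a 30-fold sensitivity gap, as stated above.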

Slide 12: For RNA extraction, we finally settled on filter plates from Norgen Biotek or Zymo Research, processed in a centrifuge. We use a centrifuge with 4 positions for 96-deep-well plates. The epMotion 96xl dispenser is used to transfer the lysate to the filter plate, and a VIAFILL is used for quick plate dispensing of washing and elution buffers. The MANTIS is used to add 4 µl of spike-in control and carrier RNA. Carrier RNA is essential to improve extraction efficiency, especially for low-concentration samples. One operator can do 374 samples in 90 minutes. Three operators can thus process 6000 samples in an 8-hour working day. Of note, other automated or semi-automated solutions either require much higher investment and/or do not reach the same throughput.

Slide 13: For qPCR setup in a 384-well plate, we use a Tecan EVO100 robot with a 96 pipetting head (or MCA). 14 µl of mastermix is dispensed, followed by 6 µl RNA sample. This process takes about 7 minutes, but with all preparations, barcode scanning and entry into our LIMS system, and plate sealing, we count on 20 minutes for 384 samples. To maximize detection sensitivity, we opted for 20 µl reaction volumes. As indicated before, for business continuity reasons, we have two validated mixes, namely Bio-Rad’s iTaq one-step RT-qPCR for probes and Takara’s PrimeScript III one-step RT-qPCR.

Slide 14: The actual qPCR is done using 4 CFX384 qPCR instruments from Bio-Rad. We don't need to use the stacking module, but we have tested it, and for a one-step RT-qPCR it seems we can stack one or two plates at room temperature without loss in sensitivity.

Slide 15: Data analysis is a huge challenge. You can imagine if you have to look at 6000 curves on a single day and interpret them for diagnostic accuracy, you need support. We use the FastFinder software from UgenTec that allows automated data transfer from the instrument to the cloud, uniform interpretation with a little bit of artificial intelligence, and importantly, it has numerous checks. It looks at the positive and negative controls. It looks at the values of the internal control, does trend analysis and neighborhood analysis to check for possible cross-contamination. The data analysis occurs at two levels. There is the interpretation level and the authorization level, and it allows coupling with our LIMS system and that of the clinical laboratory or hospital.
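To make the neighborhood-analysis idea concrete, here is a toy sketch. It is not FastFinder's actual (proprietary) algorithm; the wells, Cq values and thresholds are made up, and the rule, flag a weak positive sitting next to a strong positive, is just one plausible heuristic for spotting plate cross-contamination:

```python
ROWS = "ABCDEFGH"

def neighbors(well):
    """Adjacent wells (up/down/left/right) on a 96-well plate, e.g. 'B3'."""
    r, c = ROWS.index(well[0]), int(well[1:])
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 8 and 1 <= nc <= 12:
            out.append(f"{ROWS[nr]}{nc}")
    return out

def contamination_flags(cq_by_well, weak=35.0, strong=25.0):
    """Flag weak positives (Cq >= weak) adjacent to strong positives
    (Cq <= strong) as possible cross-contamination; None = negative."""
    flags = []
    for well, cq in cq_by_well.items():
        if cq is None or cq < weak:
            continue
        if any(cq_by_well.get(n) is not None and cq_by_well[n] <= strong
               for n in neighbors(well)):
            flags.append(well)
    return flags

# Made-up plate: strong positive B3, weak positive B4 next to it,
# negative C4, and an isolated weak positive D7.
plate = {"B3": 22.1, "B4": 36.2, "C4": None, "D7": 36.5}
print(contamination_flags(plate))  # -> ['B4']
```

The isolated weak positive D7 is left alone; only B4, which borders the strong positive B3, is flagged for manual review.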

Slide 16: Our platform is approved by the federal institute for public health, Sciensano, and the federal agency for medicines and health products (FAMHP). We also operate in an ISO17025 accredited lab, with accreditation for the test pending. We participated in a European quality assessment scheme and process blind proficiency samples from our customers. To safeguard our quality, we have multiple controls in each experiment, namely an internal spike-in RNA that controls for RNA extraction and RT-qPCR of each sample, and 2 positive and negative workflow controls per batch of 92 samples. Finally, we have introduced digital PCR as an orthogonal validation method and to calibrate our platform.

Slide 17: Talking about control samples, in the next 3 slides I want to illustrate how we use the Twist Bioscience control RNA. First, we use it as a positive control for RT-qPCR, whereby we spike 1000 molecules in well H12 in each plate. It not only serves as a control, but can also be used for trend analysis and inter-run calibration if required.

Slide 18: As one of the experiments done during assay development, we created a serial dilution series of the positive control RNA from 25,000 copies down to 1 copy in a constant background of spike-in RNA, followed by duplex RT-qPCR testing. This allows us to determine the linearity (coefficient of determination), the RT-qPCR efficiency, and the single-molecule Cq value. It also allows us to determine the limit of detection, defined as the lowest concentration at which 95% of replicates are detected.

Slide 19: On the right, you see a visual representation of 32 replicates of 6, 3, 1 and 0 cDNA copies (digital PCR calibrated copies). As expected, we did not detect any signal in the negative controls, and detected all replicates with 6 copies per reaction. When using 3 copies, we lost signal in 5 wells. This means that our limit of detection is between 3 and 6 copies per reaction using this particular qPCR mix and E-gene assay.
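The analyses described on slides 18 and 19 can be sketched in a few lines. The formulas are the standard qPCR ones (efficiency E = 10^(-1/slope) - 1 from the standard-curve slope; LoD as the lowest level at which at least 95% of replicates are detected). The replicate counts below mirror the ones quoted for 6 and 3 copies; the 1-copy detections and any standard-curve numbers are made up for illustration:

```python
def standard_curve(log10_copies, cq_values):
    """Least-squares fit of Cq versus log10(input copies).
    Returns slope, intercept, R^2 and PCR efficiency
    (an ideal assay has slope ~ -3.32, efficiency ~ 1.0)."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(cq_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    syy = sum((y - my) ** 2 for y in cq_values)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, r2, efficiency

def limit_of_detection(replicates_by_level, threshold=0.95):
    """replicates_by_level maps copies/reaction to a list of True/False
    detection calls; LoD is the lowest level meeting the threshold."""
    passing = [copies for copies, calls in replicates_by_level.items()
               if sum(calls) / len(calls) >= threshold]
    return min(passing) if passing else None

# 32/32 detected at 6 copies, 27/32 at 3 copies (as on the slide),
# and an illustrative 12/32 at 1 copy.
replicates = {6: [True] * 32,
              3: [True] * 27 + [False] * 5,
              1: [True] * 12 + [False] * 20}
print(limit_of_detection(replicates))  # -> 6 (3 copies: 84% < 95%)
```

With these counts the 95% criterion is first met at 6 copies per reaction, consistent with the "between 3 and 6 copies" conclusion above.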

Slide 20: Finally, we also used the Twist positive control RNA for qualification of our qPCR instruments, to demonstrate fitness for purpose and equivalence of the diagnostic test across the different instruments. Among other things, we test a serial dilution series, but also a homogeneity plate in which we test 384 identical samples, as shown on the right side. We want to see good linearity, good efficiency, and reproducible detection at the limit of detection, but also uniform Cq calling and uniform end-point fluorescence values. Of note, we have a free app available on our website that can process these types of data from a qPCR instrument and perform a functional validation of any qPCR instrument. The link for that will also be shared by the moderator.

Slide 21: To finish my part, I want to show the ongoing developments we have in place. Despite its slow mutation rate, SARS-CoV-2 accumulates mutations as it persists in the human population. It is important to evaluate PCR-based diagnostic assays as new SARS-CoV-2 genome sequences become available. Based on the COVID-19 Genome Analytics in EDGE Bioinformatics from Los Alamos National Laboratory (USA), 99.5% of the more than 50,000 SARS-CoV-2 genomes are detectable with the E gene assay, and 98.03% with the N gene assay. Using the 2 assays at the same time not only slightly increases analytical detection sensitivity, but also raises strain coverage to 99.99%.

Slide 22: The Belgian institute for public health, Sciensano, estimates that, early next year, Belgium will face 40,000 daily consultations for flu-like symptoms for a period of 100 days (for a population of 11 million inhabitants, equivalent to 1 in 275 persons). Most of these will, hopefully, be free of SARS-CoV-2, but when doing a test to rule out COVID-19, it would prove valuable to determine whether the patient is infected with influenza or respiratory syncytial virus. We are therefore developing a 7-target, 4-color multiplex assay to co-detect SARS-CoV-2, influenza A and B, and RSV A and B.

Slide 23: With that, I conclude. We have set up a high-throughput, modular SARS-CoV-2 RT-qPCR detection platform. Primary patient sample handling is the major bottleneck, followed by RNA purification. The analytical measurements, meaning the qPCR, scale best. We have built a customized solution that is scalable, has lower cost, and ensures business continuity. Finally, control samples are key for assay development and quality assurance.

Slide 24: To finish, I want to acknowledge several people, namely the Belgian Task Force, several companies that have contributed instruments such as Formulatrix, BASF Innovation Center Gent, Bio-Rad, Ghent University, Inbiose, CIMIT, and Janssen, and finally the entire COVID-19 team at Biogazelle. With that, I give the word to the next speaker, and I look forward to the discussion.

James Grayson (UgenTec)

All right. Thank you, Jo, and hi, everyone. Let me just share my screen here. For the next few minutes, we're going to take a look at some of the complexity of high-throughput testing environments and some of the tools that can be used to manage that complexity.

Well, over the last few months, we have all seen and experienced various levels of response to the COVID-19 pandemic. During that time, UgenTec has been fortunate enough to be involved in supporting both local and national initiatives for COVID-19 testing.

What is interesting in this emergency situation are the challenges of rapid scaling to enable, in some cases, country-wide testing. Some of the complexities we've experienced include the variability that comes from different initiatives having to use various instruments, providers and manufacturers, so a mixed population of testing devices, and variable assay kits from different providers: not only do the kits themselves vary, but they may differ in the genes they target and have variability in their own control systems as well.

We definitely see a challenge in the availability of trained clinical staff to interpret and report results. Implementing standardized methods of measuring and verifying lab performance across multiple locations is critical in these cases. In the following slides, we're going to show how you can manage some of this complexity using instrument- and assay-agnostic intelligent software solutions.

To address some of the challenges of high-throughput COVID screening, UgenTec's FastFinder platform has been used to support large-scale testing initiatives. The FastFinder solution is comprised of several modules. The first is Analysis: an instrument-agnostic, assay-agnostic PCR data analysis and results reporting solution. Analysis does curve evaluation, results reporting and decision tree application, which is the automation of your assay interpretation rules directly from your IFU or SOP. This is critical in a high-throughput workflow where you may be supporting more than one assay or more than one kit, each with its own interpretation rules, and we can digitize those to make sure that everything is being applied correctly. With that, we take clinical reporting far beyond standard instrument software.

Next we have Workflow. Workflow is your laboratory's air traffic control system. This allows you to manage instruments and track samples through your testing process. We'll talk more about that in just a bit. Lastly, we have Insights. FastFinder Insights is your laboratory's operational performance tracking system. Insights provides standardized dashboarding and detailed lab intelligence. It visualizes key performance parameters like positivity rates for critical targets, QC performance across instrument fleets and sites, and testing turnaround time.

As Jo mentioned, all of these solutions are cloud-based, securely hosted and highly scalable thanks to Microsoft Azure and our partnership there. We are ISO 13485 compliant and ISO 27001 compliant.

We're going to dig a little bit deeper into FastFinder Analysis. Analysis is the primary component that is being used and was highlighted by Jo. Analysis is our artificial intelligence-driven data analysis and data management assistant.

In a typical lab, we see users having to interact with the instrument software, manually review and interpret PCR amplification curves, cross-reference those against various flags that may or may not exist in the software, and check them against the rules and interpretations from either the IFU or the SOP. The idea behind FastFinder is that we can digitize all of that and make it a one-click or fully automated action to go from raw data in to reportable answer out.

Part of how this works is that we apply artificial intelligence to interpret the PCR curve. In the simple, straightforward case, we have simple thresholding, which most instruments on the market use to determine the Ct or Cq. In the perfectly textbook case that's very applicable, but what I think we've all experienced working in the industry is that PCR curves can be far from textbook. We use artificial intelligence to look not at a threshold but at the various features of the curve: the slopes, the overall behavior, artifacts from instruments, chemistries or workflows, and we take all of that into account in interpreting the curve and producing a Cq and, finally, a call for that curve.
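The baseline thresholding approach contrasted with the AI method here amounts to finding the fractional cycle at which the fluorescence first crosses a fixed threshold. A toy sketch with made-up data (real instruments also apply baseline subtraction and smoothing before this step):

```python
def threshold_cq(fluorescence, threshold):
    """Return the fractional cycle at which the amplification curve
    first crosses the threshold (linear interpolation between cycles),
    or None for a curve that never crosses, i.e. a negative call."""
    for cycle in range(1, len(fluorescence)):
        lo, hi = fluorescence[cycle - 1], fluorescence[cycle]
        if lo < threshold <= hi:
            # list index i holds the reading for 1-based cycle i + 1,
            # so `cycle` is the 1-based cycle of the reading below threshold
            return cycle + (threshold - lo) / (hi - lo)
    return None

# Toy curve: flat baseline, exponential rise, then a plateau.
curve = [0.1] * 20 + [0.2, 0.5, 1.1, 2.3, 4.0, 5.5, 6.2, 6.4, 6.5, 6.6]
print(threshold_cq(curve, 1.0))  # crosses between cycles 22 and 23
```

A creeping negative curve never crosses the threshold and returns None, which is exactly the gray zone where a fixed threshold struggles and curve-shape features become useful.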

Part of our process is training on datasets that can be very specific to a given instrument or assay. It's easy to see that you could call a clean negative a negative and a clean positive a positive, but what do you do with the gray zone, and how do you manage curves that aren't behaving in a textbook fashion? That's part of what our software does: we can train the artificial intelligence algorithms to interpret ambiguous curves, for example a creeping curve with an upward drift that is in fact negative, and to decide how to treat them.

Here we see examples of the software interface, and the important thing to understand here is that the software is agnostic. In this COVID case, and these are emergency cases, we can support a significant number of instruments from many different manufacturers, all with one interface, and still have the same look and feel some people are used to, being able to see your curves and your plate laid out. On top of that, using the application of the decision tree, we can combine the results, do the analysis and compare that against all of the validity criteria for the interpretation. In doing that, we can generate a final result and also flag any results that are atypical or anomalous, or anything the end user would like to have called out and flagged for manual review. This is how we can support both the limited availability of trained staff and standardized interpretation, given the variable human factors involved in making a final call.

On the right here, we see some idea of how we build in this interpretation with our decision tree builder: how we can add rules and various Cq ranges, which we can use to capture the rules of the IFU.

Next we're going to speak a little bit about Workflow. Workflow is your sample chain of custody across plates, instruments, and assays. With Workflow, we take a complete look at all of the instruments involved in taking a sample from raw primary tube to final result out. We can capture various workflows and present them in a dashboard, so we can capture all of the steps of the workflow and all of the instruments involved. Where possible, we can speak to those instruments and dashboard performance across the entire workflow and through the entire testing process. We can have views of recent analyses, logs of any issues or errors, and really get an idea, from a dashboarding perspective, of how the lab is operating and what step any given sample is currently in.

We can take that down to a very detailed level within Workflow, where we can track an individual sample through its whole process in the lab, and we can do this in an automated way. We can see when a sample is first transferred from the primary tube to the plate. We can track that through extraction and all the way through PCR, analysis and result. We have chain of custody and audit trailing through the whole process. We can do that either in an automated fashion or, as here, in a manual fashion. Some instrumentation in these workflows is not automated and may not produce files, and we can add audit tracking and trailing through the testing process to at least confirm that steps have occurred.

Another key feature of FastFinder is Insights. Insights is the operational business intelligence of the laboratory. We can use Insights on top of Workflow and Analysis to generate automated reports and verify the performance of instruments and operators. We can do things which I think are critical here, like monitoring batch-to-batch quality with control systems.

With Insights, we can provide standardized lab intelligence in which we can monitor things like testing rates, positivity rates and other key performance indicators for the lab. In this case, we can track over time the positivity rate of the COVID samples passing through the lab. We can see the number of tests performed, the number of instruments, and the number of sites involved. We can monitor the individual behavior of controls, like the positive control or negative control.

This is an example of a dashboard that can be used to track performance across the lab. Here we have full access to multi-site testing, and everything is grayed out for privacy purposes. We can see the performance of individual labs across a multi-site testing initiative. We can see what instruments these labs are using, and at what rate. We can see the different assays that are being tested, and all of this can be monitored remotely from a central location if there's an overseeing body.

Here we see something that I think is very critical to the overall concept of a national initiative such as COVID-19 testing: we can see the behavior of positive controls over time, by instrument and by target, and really look for variation. We can flag deviations in behavior from the system. This is something that can be used for troubleshooting and for monitoring individual performance at sites. It has been used in various proficiency monitoring programs implemented in some of these testing initiatives to do ongoing QC of the lab, across sites and across the initiative.

In conclusion, compiling and implementing a high-throughput testing program comes with significant challenges in resource availability, resource management, and program standardization. We have worked with many high-volume initiatives to mitigate those challenges with intelligent software. Thank you for your time, and back to you, Chris.

Round table discussion

Chris Thorne (Twist Bioscience):
Thank you, James, and thank you, Jo. Those were two very interesting talks; I very much appreciate it. I'm looking forward to discussing them further now, but before we get into that, I just want to begin by welcoming Rebecca Nugent, who joins us today from California and is Twist's Life Sciences Research and Development Director. She joined Twist after spending many years in the biofuels and green chemicals industry and leads our R&D teams focused on the development of synthetic biology and next generation sequencing target enrichment products. Rebecca, welcome and thank you for joining us.

Rebecca Nugent (Twist Bioscience):
Hi, thanks for having me.

Chris Thorne (Twist Bioscience):
I just want to begin, if I may, with a general question. We've seen two quite technical presentations there, and I think it highlights the amount of work that's gone into getting this all set up in the last few months. But of course, nine months ago none of us were working on COVID-19, and we've had to move at speed to make this happen. I wondered, and Jo, perhaps I could direct this question to you to begin with: there's a tension here, in dealing with this pandemic, between needing to get these workflows set up at speed, so that we can address this need as quickly as possible, and needing to be as rigorous as possible in the workflows that we are using. I wonder if you can comment on whether you have seen that tension in the work that you've been doing, and how you have approached dealing with it.

Jo Vandesompele (Biogazelle):
Yeah, an excellent question indeed. We experienced a lot of tension and pressure. Setting it up in less than two weeks, we obviously cannot get to the same diagnostic quality as a lab that has been doing viral diagnostics for 20 years. If such a lab implements a novel test for this new virus, it's much easier, because the test is embedded in a lab with the experience to do so. What you can do is controls, controls, controls, and validate every step as well as possible. I think we have to be honest that we improved over time; it's impossible to do it all in two weeks' time. Having control samples and knowing what to expect when something is negative, when your sensitivity is at stake, or your specificity, et cetera, allows you to see how well you're doing. Exchange samples: that's what we did very early on with the labs that were already up and running. Do some kind of ring testing, exchange samples, participate in proficiency schemes, and learn and see where you can improve. It's an ongoing activity and I think we're still doing that. We can always perfect, we can always improve in many different aspects, and we have done so over the last four or five months. It's an ongoing activity to learn and to improve your diagnostic test.

Chris Thorne (Twist Bioscience):
Yeah, and Rebecca, perhaps I might put that question to you as well. One of the underlying principles of synthetic biology is this process of design, build, test, learn. Can that still be applied in the context of a pandemic? Do we have that luxury, or do we have to get it right first time, do you think?

Rebecca Nugent (Twist Bioscience):
That's a really interesting question. You know, yes, I think you can. I think Jo just answered that question: you set your baseline, you do your first experiment, and then you see how you can iterate and improve on that first experiment. Jo's response really resonated with something that we did at Twist: originally, when we developed the synthetic controls, we went from concept to commercial launch in about three weeks. We'd never done that before; it was an incredible effort. So what we learned from our first iteration of building our two control strains, we then applied to the other controls that we released, which are just genetic variants of SARS-CoV-2. Then we took that knowledge and applied it to more controls for different viruses with the respiratory controls. So yes, there is room for the design, build, test, learn cycle even at the pace at which we're moving in this pandemic.

Chris Thorne (Twist Biosciences):
Interesting. And James, one of the slides that jumped out at me was towards the end of your presentation, where you showed how your software allows you to look across multiple labs that are potentially using the same software. Has it been your observation that these labs are able to ... how are labs learning from one another, and are you able to feed back when things aren't going quite so well? Is there a collective effort to improve that you are seeing from your perspective?

James Grayson (UgenTec):
There is, and it comes in a couple of different ways, but what we're definitely seeing work is these proficiency monitoring programs. We see an overarching QC monitoring program put in place using various controls to really track how individual labs are performing, and then using that data to see how to make improvements and to find potential issues with systems. We've been asked to do things like flag different error types and bring those to people's attention. I think it's a really interesting mechanism for optimizing performance, for truly trying to get to the best quality of results using the variety of instruments and reagents that are available in this situation.

Jo Vandesompele (Biogazelle):
If I may add to that, it's amazing what you can learn by looking at lots of data: trends that you simply don't notice if you don't have the tools, but with massive amounts of data you see things that you didn't see before.

Chris Thorne (Twist Biosciences):
Jo, do you have an example in mind? I'm just curious.

Jo Vandesompele (Biogazelle):
Sure. Repeated freeze-thaw cycles of reagents, or a newly purchased batch of synthesized primers: you see performance differences if you accrue data over time and have many measurements with that same lot, something that remained unnoticed to us before.

Chris Thorne (Twist Biosciences):
That leads me on to another question. You showed the use of the controls in your 96-well plates as in-process controls, but do you see there being a need for ... you obviously have a multistage workflow, but are there controls applied across the whole wet-lab flow, or are there gaps where you think different controls would be needed to get that level of oversight?

Jo Vandesompele (Biogazelle):
It's a good point. We can discuss a lot about controls, the ideal controls. I think there are hundreds of different control materials available, and there are also different schools of thought here. We definitely have workflow controls from the very beginning. One of our negative controls is a negative sample that really goes through the whole workflow: primary sample transfer, RNA extraction, RT-qPCR, and data export. Then we include an additional negative control. As for positive controls, in the very beginning we only used the positive control from you guys, which came in after RNA extraction, so it is an RT-qPCR control and a data analysis control, but it didn't control for extraction. That's why we now have samples from highly positive patients that we dilute and introduce in each plate as a more representative sample that undergoes primary transfer, extraction, and RT-qPCR. I think there's room for controls at each step of the workflow, but it's important to consider that no single control covers everything you want, and that's why you need to build in multiple controls.

Rebecca Nugent (Twist Biosciences):
You mentioned negative controls. Can you go more into the importance of, or the differentiation between, a positive control and a negative control, and the type of considerations that go into defining what those actually are?

Jo Vandesompele (Biogazelle):
Is this a tricky question?

Rebecca Nugent (Twist Biosciences):
No, it shouldn't be a tricky question. I mean, from my perspective ... go ahead, but I can clarify more if you'd like.

Jo Vandesompele (Biogazelle):
Well, our negative controls, I'm not sure if they are the best ones or if you could consider other options, but for us it's a water control in the same buffer as the patient sample, because the matrix has an effect as well. The ideal negative control would of course be a patient sample that is confirmed to be negative, because then everything, including the human RNA in that sample, may have its specific matrix effect. That's more challenging and difficult to do, so we use water as a negative control. I think the most important thing here is to show that you don't see signal when you don't expect one. False positives are clearly a huge problem, and they come from two sources. The first is reagents; this is well known and documented, and many of the suppliers have worked hard on it. Their reagents, primers, probes, PCR mix, even extraction reagents, were contaminated, probably because in ramping up production they synthesized so much synthetic template to cater for the entire world that many of the production facilities, not all but many of them, were contaminated with SARS-CoV-2 template. You need controls to make sure that your lab and your reagents are not contaminated. But cross-contamination is also a genuine concern, especially in a high-throughput lab. I know it should not be there; we don't want it, but some of these patients have such a high viral load, billions of molecules, that a tiny droplet of a femtoliter can create a positive signal in a neighboring well. That's why you need controls to try to capture that. We do that at two levels: our negative controls in the process, but also using the software, which will flag a warning if two positive samples are adjacent to each other. That alerts us, and we can then make a call whether we should send them for repeat extraction or not.
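The adjacency check described here can be sketched in a few lines of code. To be clear, this is an illustrative sketch only, not Biogazelle's or UgenTec's actual logic: the Cq thresholds, well naming, and flag semantics below are all assumptions made for the example.

```python
# Illustrative sketch: flag a late positive whose neighbor on a 96-well
# plate is a high positive, since a femtoliter droplet from a high-titer
# sample can seed an adjacent well. Thresholds are assumed, not real.
from string import ascii_uppercase

ROWS = ascii_uppercase[:8]   # rows A-H
HIGH_POS_CQ = 25.0           # assumed cutoff for "high viral load"
LATE_POS_CQ = 35.0           # assumed cutoff for "suspiciously late"

def neighbors(well):
    """Wells directly above/below/left/right of `well` (e.g. 'B7')."""
    row, col = well[0], int(well[1:])
    r = ROWS.index(row)
    out = []
    if r > 0:
        out.append(f"{ROWS[r - 1]}{col}")
    if r < 7:
        out.append(f"{ROWS[r + 1]}{col}")
    if col > 1:
        out.append(f"{row}{col - 1}")
    if col < 12:
        out.append(f"{row}{col + 1}")
    return out

def contamination_flags(cq_by_well):
    """Return (late-positive well, high-positive neighbor) pairs.

    `cq_by_well` maps well IDs to Cq values (None = no amplification).
    """
    flags = []
    for well, cq in cq_by_well.items():
        if cq is None or cq <= LATE_POS_CQ:
            continue  # negative, or clearly positive on its own merits
        for nb in neighbors(well):
            nb_cq = cq_by_well.get(nb)
            if nb_cq is not None and nb_cq <= HIGH_POS_CQ:
                flags.append((well, nb))
    return flags
```

A flagged pair would then go to a reviewer, who decides whether to repeat the extraction, exactly as described above.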

Rebecca Nugent (Twist Biosciences):
That's exactly what I was looking for. When we were developing our respiratory controls, one of the things we were really keen to demonstrate was that they could be used as a positive control to detect viruses like influenza and bocavirus, but we also made sure that they were a negative control for SARS-CoV-2, because cross-contamination is an issue, and sensitive detection is also a great issue to raise. I love your point there about cross-contamination and how you use the software. I guess, James, a question for you, just to shift this over to the software side: what kind of flags or warnings do you look at? You were talking about the gray zones for those samples that may be positive or may be negative. Do you go into that data when you find two positive results next to each other?

James Grayson (UgenTec):
Sure. It's a little bit of a two-part question based on how you asked it. First is interpreting the positive itself: is this positive? Is it clearly a positive from a normal sample? Is this a late positive that could be related to a false positive result, something that amplifies really late? Then we do a proximity evaluation. We will look at the space around a high positive and a low positive and see if there's any chance that this could have been a local cross-contamination, and we will raise that flag as one of many different flags we can put in place to bring it to the reviewer's attention to check. That's really built into something we call the decision tree, which is working with the end user to build a set of logic. How do you want this reported? If we see this morphology, if we see this pattern, how do you want us to treat it? Do you want us to bring it to your attention? Do you want to immediately invalidate that result and send it back for retest? It is a lot of working in collaboration with the end user to make sure that we've captured how they would like to interpret data within the software itself. But using the intelligence, we can capture all the information and flags and do a smart cross-check: does it make sense that this could be cross-contamination from a local neighbor? To me, that's the really cool part about the analysis and the intelligence.
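The rule-driven behavior described here, lab-defined logic mapping raised flags to an action, might be sketched as an ordered rule list where the first match wins. The rule contents, flag names, and action names below are illustrative assumptions for the sake of the example, not FastFinder's actual configuration or API.

```python
# Illustrative sketch of a "decision tree": each rule is a set of flags
# that must all be present, paired with the action to take. Rules are
# checked in order; the first match wins. Names are made up for this example.
RULES = [
    ({"failed_control"}, "invalidate_and_retest"),
    ({"late_positive", "adjacent_to_high_positive"}, "repeat_extraction"),
    ({"late_positive"}, "manual_review"),
]

def decide(flags):
    """Return the configured action for a result's set of raised flags."""
    flags = set(flags)
    for required, action in RULES:
        if required <= flags:  # all required flags are present
            return action
    # No rule matched: release clean results, review anything unexpected.
    return "auto_release" if not flags else "flag_for_review"
```

For example, `decide(["late_positive", "adjacent_to_high_positive"])` would return `"repeat_extraction"`, matching the workflow Jo described for suspected neighbor contamination.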

Chris Thorne (Twist Biosciences):
I guess, off the back of that series of questions about how we've come to this place, I want to ask all three of you whether some of the learnings you've taken from the last six months are things you can carry forward, both in the context of your current work and into different disease areas as well. Perhaps I could begin with Jo. Obviously, Biogazelle has been doing this kind of work for a number of years, but perhaps not at this scale. Do you see yourself continuing in this manner, and what are the things that you as a company want to take forward, hopefully once the pandemic is behind us?

Jo Vandesompele (Biogazelle):
To make it clear, we are qPCR experts. We design, develop, and validate qPCR and digital PCR tests; however, we had never done diagnostics before, and that's a really different ball game, not only the massive scale we are operating at but also the stricter requirements on turnaround time and guaranteed quality that molecular diagnostic testing demands. We have definitely learned a new way of working at high throughput and short turnaround times with the superior quality that a molecular diagnostics lab really demands. This is the experience we want to bring to our clinical trials. We do a lot of clinical trials for our pharmaceutical and biotech partners, and this is very similar. Perhaps the turnaround times are not as strict in a clinical trial, but often the throughput and quality requirements are very much on par. We have definitely learned a lot of lessons that we can bring over to the clinical trial division. Also, our LIMS system was, I would say, not perfect before, and we have had to make it perfect, tracking every single detail down to a new dilution. This is also very helpful for future work.

Chris Thorne (Twist Biosciences):
Okay. James, what about you? Is the FastFinder software applicable only to infectious disease, or would you be expanding, or perhaps it's already applicable more broadly?

James Grayson (UgenTec):
In the before times, before COVID, FastFinder was already a clinical diagnostic tool. We started FastFinder, and the analysis portion of it, really focused on the clinical market and results interpretation and reporting, but before all of this we had also grown into different areas like oncology, PGx, ag bio, and veterinary testing, covering a lot of different applications and a lot of different modalities, not just RT-PCR but melt curve and various other PCR-adjacent analyses. I think from that side, it's nice to be out in all of these different industries. Specifically in this COVID pandemic, as a software solution, as a clinical solution, it's been good to see that we could scale up enough to reach hundreds of thousands of analyses per day globally with some of the programs we're involved in. What I really see us having developed over these last months, though, has been the insight software. It's that business intelligence layer to track performance across multiple sites, locations, labs, and modalities that I think has been critical, and hopefully we can continue to implement it for whatever comes next. That's where I'm very proud of UgenTec and my small part in it, and my small part to play in the whole pandemic as well. It's been good to be involved.

Chris Thorne (Twist Biosciences):
Rebecca, so Twist is a DNA synthesis company, and here we are making RNA. Tell me, what are we taking forward as far as you're concerned, and what are your team's learnings as well?

Rebecca Nugent (Twist Biosciences):
Yeah, we have learned how cross-functional our platform is and how ubiquitous DNA is as a tool. DNA is the starting material for a variety of different tools. In this case, we used DNA as a template to make the synthetic viral controls. We also used our platform to create next-generation target enrichment products so that we could detect and characterize different viral genomes. I think the takeaway is our ability to use our platform in a very speedy way to respond to the challenges and threats that our society faces, using target enrichment and the synthetic controls for the detection and characterization of emerging pathogens and viral threats.

Q&A Session

Chris Thorne (Twist Biosciences):
A question from one of our participants: "How much data is being used to train FastFinder?" I guess this is a good question to ask now because perhaps you're getting more data than you would have been six months ago. Has this also proven an opportunity to improve the artificial intelligence algorithms? Perhaps you could speak a little bit to that.

James Grayson (UgenTec):
Yeah, and to be clear, this is not my personal area of expertise, the artificial intelligence and all of that, but we've been on the market with FastFinder, with the artificial intelligence and curve interpretation, for three to four years if not more; I lose track sometimes. We have really good out-of-the-box performance for interpreting those curves; we're at a very high level of accuracy. Where we train, and where there's always room to grow, is the various workflows, chemistries, and instruments that are all strung together. I think Jo had mentioned the behavior of curves with different kits paired with different buffers and how that variation can lead to different curve morphologies. That's where we really target and work with the individual workflow and the individual users of that workflow to capture the expert's interpretation of those corner cases. Where we're always learning and always picking up is how you pull in those unique corner cases that aren't the perfect PCR curve.

Chris Thorne (Twist Biosciences):
I have a question here for Rebecca regarding the controls, from Ali Bektas, wondering if you could go into a little bit more detail about how the synthetic controls are themselves QC'd. Obviously we want to supply them to Jo so that he can use them straight out of the tube. What are we doing to ensure that our tubes contain what we say they do?

Rebecca Nugent (Twist Biosciences):
There are multiple QC steps along the process. One of the QC steps we use is next-generation sequencing to confirm that the sequence of the viral genome is what we think it is. We also use qPCR, and we're transitioning to digital PCR, to confirm the approximate number of molecules that we're distributing into each tube we deliver to the customer. So one, sequence verification, and two, verification of the number of copies per microliter that we're shipping out.
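As a rough illustration of how digital PCR turns partition counts into copies per microliter, here is a minimal Poisson-correction sketch. The partition count and volume are made-up example numbers, not Twist's actual QC parameters or instrument specifications.

```python
# Illustrative sketch of digital PCR quantification: the sample is split
# into thousands of partitions, and the fraction of partitions with NO
# amplification gives the mean copies per partition via Poisson statistics.
import math

def dpcr_copies_per_ul(total_partitions, negative_partitions,
                       partition_volume_nl):
    """Estimate template concentration (copies/uL) from a digital PCR run."""
    if negative_partitions == 0:
        raise ValueError("all partitions positive: sample too concentrated")
    # P(partition is empty) = exp(-lambda), so lambda = -ln(fraction empty).
    lam = -math.log(negative_partitions / total_partitions)
    copies_per_nl = lam / partition_volume_nl
    return copies_per_nl * 1000.0  # convert nL to uL

# Example with assumed numbers: 20,000 partitions of 0.85 nL,
# 14,000 of which show no amplification.
conc = dpcr_copies_per_ul(20000, 14000, 0.85)  # about 420 copies/uL
```

Because every partition is counted rather than compared to a standard curve, this kind of calculation yields an absolute copy number, which is why digital PCR is attractive for verifying the copies per microliter dispensed into each tube.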
