
Scaling up: implementation and how to overcome limitations

A group of researchers share their experiences of scaling up SARS-CoV-2 sequencing
16.6
And what do you remember about scaling up? Do you remember sort of why that–
20.6
Sure: panic! Yeah. I think scaling up obviously coincided with that first winter and the amount of samples that we were getting coming through, with the introduction of the 96-barcode kit so that we could actually do more samples, and with the transfer from the MinION to the GridION, which meant it was feasible for us to scale up at that point. We could suddenly run five libraries in parallel. And I'm sure you both got very annoyed at me when I kept saying yes when we were asked if we could take more samples from more NHS sites. Definitely. Can we do this? Yeah, we can do this.
63.6
But I think that's just the way research goes in sort of any organisation. You're always saying yes to something and you're always open to change. I think a lot of NHS diagnostic labs would struggle to make those changes very quickly. Obviously, they are very restricted in what they do, and they do things very well, but in a certain way, so it's hard to adapt like that. I think having a team that was as dynamic as it was really helped with bringing those changes in so quickly. Yeah, absolutely.
98.1
And also, we were lucky in a lot of ways because we did have that time at the beginning to build a foundation of knowledge before we then had to scale up. So by that point, we were well versed in what we were doing, and scaling up to the next step was more achievable. And we were lucky that at some point we were able to hire three new members of staff, and that really helped us to coordinate who does what. Because, as I'm sure people will know and find out, the library prep is so intensive in terms of the pipetting for the barcoding and the A-tailing steps.
131
So we would have two people on that, working in parallel. One person would be doing all the preparation for the first dilution and the A-tailing, and the second person would be preparing the barcoding. That really sped up the first section of the day. And then we could process samples more accurately, because you're not overworking one person, and a lot more rapidly.
153.6
And I think having those extra staff members come on board really helped to build the team up. Because obviously it was just the two of you in the lab, and myself working from home, attempting to homeschool an eight-year-old while developing bioinformatics pipelines, which in itself was very difficult because it sort of hadn't been done before. It was very much being thrown together and pieced together from bits of Perl scripts and sellotape and hopes and dreams, and just really hoping that this process would work.
189.5
And I still remember, like you say, over the winter, when we started on the HOCI project, the Hospital Onset COVID Infection project, which obviously had a requirement for much more rapid turnaround, we had to completely overhaul the entire bioinformatics pipeline. But nowadays, as time has gone on, companies and other people have developed very robust pipelines for doing those analyses, which makes it far more straightforward and far easier to do at high throughput, whereas we were very much trying to build the wings while we were flying the plane. Yeah. I think that was a lot of the process again.
237.6
As you use the protocol more often, you see areas where you could change things and make them better, but you can only make and test those changes while you're still running all the samples at the same time. So you're doubling your work when you're trying to make those small improvements. Yeah. But I think as well, you're right.
264
Every time we ran an experiment, we learnt something and made a change for the next experiment, so we were always finessing and fine-tuning and finding a little thing that we could do to save a bit of time. Even for the dilution step at the beginning: instead of pipetting 45 microlitres 96 times, or whatever it was, we would put all of the nuclease-free water in a reagent reservoir and use the multichannel to dispense it. And we created these really comprehensive tick sheets for the protocol, so we could easily follow where we were as we were going along and not get lost. It really was a stepwise process, wasn't it?
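As a rough illustration of that reservoir trick, here is a minimal sketch of the arithmetic, assuming the 45 µl per well across a 96-well plate mentioned above and a hypothetical 10% overage for reservoir dead volume:

```python
# Back-of-envelope reservoir calculation for a plate-wide dilution step.
# The 45 µl x 96 wells figure comes from the discussion above; the 10%
# overage for multichannel/reservoir dead volume is an assumed example value.

VOLUME_PER_WELL_UL = 45      # nuclease-free water per well (µl)
WELLS = 96                   # full 96-barcode plate
OVERAGE = 0.10               # assumed extra to cover reservoir dead volume

total_ul = VOLUME_PER_WELL_UL * WELLS * (1 + OVERAGE)
print(f"Fill the reagent reservoir with ~{total_ul / 1000:.2f} ml "
      f"of nuclease-free water for {WELLS} wells")
```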
303.6
And improvements as things went on. Every time we made a mistake, and mistakes happened a lot, and issues happened with runs, and these being clinical samples, we had to make sure that the data coming out was accurate. So if we had issues with contamination, anything like that, we just had to start again, fix the problem, and get it right. And we really did develop those processes for improving it by getting it wrong. Yeah, we did make a lot of mistakes. There was a lot of that, especially in the beginning. I would say, look, you had some experience of sequencing before, but not with nanopore. I hadn't sequenced before.
344.7
So we really were learning as we went. Yeah, OK, some of those lessons came the hard way. But actually, I think as long as you're taking on board the lessons learnt and changing for your next experiments, then it doesn't matter too much, because it's going to happen. Yeah, and it's the things that you kind of forget, like lot numbers and everything else, when you're working through. Trying to work back when your records aren't very good is very difficult.
373.8
So having really good records from the start, even though it might be time-consuming and really onerous, means that when something does go wrong, it's very easy to go back and find the runs where you've used the same reagent and it hasn't worked on all of them. You can work out, especially with those very expensive reagents, which one is likely to be the cause of your contamination, and then remove it. It's that process that can really slow you down when you're constantly having contamination issues. And we didn't necessarily have the money to keep repeating experiments time and time again, or the sample to do so.
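To illustrate that record-keeping idea, here is a minimal sketch that traces a suspect reagent lot across runs; the run names, lot numbers, and data layout are hypothetical, not the team's actual records:

```python
# Hypothetical reagent-lot records per sequencing run: which lots were used,
# and whether the run passed QC. All names and lot numbers are made up.
runs = {
    "run_041": {"lots": {"LIG-0917", "H2O-0101", "BC96-0221"}, "passed": True},
    "run_042": {"lots": {"LIG-0918", "H2O-0101", "BC96-0221"}, "passed": False},
    "run_043": {"lots": {"LIG-0918", "H2O-0102", "BC96-0222"}, "passed": False},
}

failed = [r["lots"] for r in runs.values() if not r["passed"]]
passed = [r["lots"] for r in runs.values() if r["passed"]]

# Lots present in every failed run...
suspects = set.intersection(*failed) if failed else set()
# ...but never in a passing run are the most likely culprits.
for lots in passed:
    suspects -= lots

print("Suspect reagent lots:", suspects or "none identified")
```

With good lot records, this kind of cross-check takes minutes; without them, tracking down a contaminated reagent means repeating expensive runs.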
409.1
So it's having the time. When you're working with such a huge number of samples, that became a real problem when the case numbers increased, because all of a sudden there was no room for error. If you made a mistake and had to rerun a library, there was no room: we were at maximum capacity. So, to fit everything in, we really had to work hard to make sure that those problems did not occur in the first place. And you're right, the record keeping and things like using single-use aliquots, taking the time ahead of time to make sure that those errors don't happen, will take time in the first instance but save you a lot of time in the long run.
447.8
We definitely saw that when we started doing the high throughput, because by the time you realise there's a mistake, you're three or four experiments down. So there's a chance then that you're going to have to go back and repeat them, and you don't want to have to turn around and say, oh, we need to stop everything now and think about where those problems are, because you can't continue knowing there's an issue. But you still need to continue, because the samples are coming in and they're racking up, and there are thousands of samples outstanding in the freezer and they're not going away. And particularly with the HOCI project, where turnaround time was key.
478.4
And I think we did really well with HOCI in returning the reports within that time frame, with only a few exceptions. Most of the throughput that we got was through use of the 96-barcode kit, so that we could run lots of samples on a single run and have them all running in parallel. But with HOCI, because we needed that turnaround, we had to do much smaller runs. So I think we normally did a maximum of about 20 samples per run? Mm-hmm. So all of a sudden, if you have to repeat that from scratch, that's another day on top of the turnaround time. So we had to limit that where we could.
521.2
And really, I think one of the things was that the team we put together really helped with that. I think we were very, very lucky with the people that we had on the team: both the two of you, and the fact that you'd had that time at the start to develop the process and to have that information to pass on. And the three other members of the team, Kate, Salman, and Chris, were really keen and really quick to pick up the lessons that were being taught to them.
549.1
And I think having that team on board over that winter– without that, we wouldn't have been able to get to running about 1,000 samples per week over that period.
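As a back-of-envelope check on that figure, here is a minimal sketch; the 96 barcodes and five parallel libraries come from the discussion, while the two sequencing batches per week is an assumed example value:

```python
# Rough throughput estimate for the scaled-up setup described above.
SAMPLES_PER_LIBRARY = 96   # 96-barcode kit (from the discussion)
PARALLEL_LIBRARIES = 5     # five libraries in parallel on the GridION (from the discussion)
BATCHES_PER_WEEK = 2       # assumption for illustration only

weekly = SAMPLES_PER_LIBRARY * PARALLEL_LIBRARIES * BATCHES_PER_WEEK
print(f"~{weekly} samples per week")   # ~960, in line with the ~1,000 quoted
```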

In this discussion, Sharon Glaysher, Sam Robson, and Angela Beckett from the University of Portsmouth talk about the strategies they used to scale up their testing processes.

Have you had similar experiences in COVID-19 sequencing or any other project you have worked on? Can you imagine using their tips and experiences in future projects? Let us know in the comments.

This article is from the free online course From Swab to Server: Testing, Sequencing, and Sharing During a Pandemic.
