Welcome to the (Virtual) Team

I hope everyone has been adapting successfully to their new work environment. I’m going to go ahead and assume you’re all working from home or some iteration of non-standard work practices at the moment. We’re almost a month into fully remote work and narrowing down our best practices and preferred technologies. Zoom has featured heavily even though MS Teams is freely available. Zoom has the added benefit of letting you see everyone at once in meetings of more than five people. Apparently Microsoft is working on this feature for MS Teams but has yet to implement it. (Hurry up, Microsoft!) I also came across this stellar article recently: https://careynieuwhof.com/my-top-7-rules-for-leading-a-digital-team/.

However, as normal as this has started to feel, I have a new challenge coming soon: onboarding a new employee remotely. There are several unique challenges that I’m encountering with this new process. The first: how do I ensure this employee gets all the appropriate paperwork signed and has access to the needed equipment?

Fortunately, the organization I work for has been ahead of the game in preparing for “shelter-in-place”. Non-essential employees were sent to work from home before official orders came in from the government, and support services such as IT ramped up sufficiently to make sure that systems such as the VPN didn’t crash and burn. Similarly, HR has stepped up and is doing onboarding remotely, with ID cards, etc., being issued later. Which is fine since we’re not supposed to be on campus anyway. OK, one thing down, numerous more to go. We’ve even figured out a way to get the employee a re-imaged and ready-to-go laptop. Woohoo!

Working in the regulated environment of clinical trials means there is always additional paperwork that needs to be filled out prior to anyone starting work. As with any new employee, there are signatures required on documents for the Quality Department before training on SOPs (Standard Operating Procedures) can be done, and employees must be trained on the SOPs before they can be trained on the work. One would think that in an organization without a validated e-signature system, this would present a rather significant hurdle. Again, smarter heads prevailed and the Quality team came up with workarounds both for signatures and for the essential paperwork needed before SOP training can begin. So, now we’ve knocked two items off the list.

Now come the really tricky parts. 1) How do you train someone virtually? I am sure that there are teams and organizations where this is second nature but that is definitely not our experience. The first part of training will work out well. I’ve developed a reading list for all new employees that details required reading in order of priority. For my team, it’s all the SOPs first, then introductory scientific articles on HIV in general and on the research being done by our partners, then protocols and related documentation for the studies the individual will be working on. Next, any new employee can move on to a training matrix that includes everything from online training modules and additional reading (such as team Work Practice Guidelines covering everything from days off to Slack and Jira usage) to introductory meetings with admin staff and other teams. Obviously those meetings will all be virtual now.

When it comes down to training on the actual data management, that’s where we’re going to have to be a bit more creative. We’ve developed documentation on how to review specimen data, how to generate reports, etc., but I’ve found that the best training is shadowing existing staff and getting to ask questions in real-time as the tasks are being performed. No SOP or instruction document can adequately describe the intricacies of data management or account for all the possible scenarios of why a data discrepancy was generated. When so much of the work involves problem solving, how do you teach that virtually? The best solution I have at the moment is to try shadowing virtually. I think that the technology is up to the challenge with screen sharing and virtual whiteboarding available, so I’ll put a check in that box for now.

Speaking of all these virtual meetings, this brings me to the next challenge, 2) How do you integrate someone into a team virtually? As well as the team has been doing with virtual coffee chats and happy hours and TED Talk watching, they all had an existing relationship prior to going virtual. It’s yet to be seen how someone will be integrated into the group in a purely online setting. Of course I’ll try a few Zoom meeting ice breakers but I think this one is very much a TBD challenge. No check marks here. One strategy I’ve been pondering for the team anyway is called silent meetings. Since I have a range of personalities on the team in terms of those willing to speak up in meetings and those who are less so, I gravitated to this concept when it popped up in an email last week. The basic premise is that you email a question ahead of time and gather responses. The facilitator then passes out the responses (without names) at the beginning of the meeting and everyone takes time to read them. The team then identifies main ideas in the responses, the facilitator writes them on a whiteboard, and the team stars the ones they identify with. This is used to focus the subsequent discussion. Here’s a link to the article: https://slab.com/blog/silent-meetings/. I’m not sure if this will help exactly with team integration but if the new member is shy about speaking up at first, this could be beneficial.

The last challenge I’ve identified so far is 3) how do you assess performance and the retention of training? Now this might seem like a problem that is common to the whole team, but with the other members, I’ve had a bit of time (sometimes quite a bit of time) to know and understand how they communicate, how to tell when something isn’t going well, what their usual sticking points are on the work, etc. With someone new, this is much harder. I won’t be able to tell when they are saying everything is fine when it’s not, or when they are feeling dissatisfied with the work, at least not as easily. I’m in the habit of walking around and checking in with the team when we’re in the office, and that often gives me clues as to what they are doing as well as how they are doing and allows for more spontaneous updates and conversation. It’s going to be a bigger struggle building that trust and relationship with someone new while we are all working remotely. This is an aspect of the whole situation that is going to require some more research on my part. Definitely no check marks here as I don’t have anything even rattling around in my head yet about how to tackle this one. I’m open to any and all suggestions.

So, to recap, when onboarding an employee virtually:

  1. Work closely with HR, IT, etc. Don’t be afraid to ask for help, propose creative solutions and be open to thinking differently about how this all can happen. Everyone wants to make this a smooth experience for a new employee and is often willing to help (example: our IT department is printing out some material for my employee since they didn’t have a printer at home).
  2. I didn’t actually mention this above but it’s a running theme for this whole “virtual teams” situation. Communication is key. Even though I’m not 100% sure how this onboarding is going to look, I’ve been in contact with the employee and letting this person know as much as I know about what the first day will look like. Information is always appreciated, even if it’s incomplete information.
  3. Having standard training material already developed and ready to go is a lifesaver. It was so nice to have one less thing to worry about and to know that this new employee was going to get the same information that the last 3 new employees got and that it was going to contain everything they needed to get started. Additionally, having a template of key meetings to set up made that process go a lot more smoothly as well.
  4. Integrating a new employee into an existing team virtually is likely going to be a bit tricky. I’m going to have to pay special attention to make sure this new person feels included. I’ll probably try some ice breakers and will definitely try out silent meetings to even the playing field in meetings for everyone on the team.
  5. Assessing performance and training retention is truly an unknown for me at the moment. Any and all suggestions welcome.

Back to Basics

One of the interesting side-effects of having done a PhD, aside from some unwanted flashbacks whenever I come within view of a flow cytometer and the uncanny ability to know where the mouse facility is in any research facility based on smell alone, is my iterative approach to pretty much everything. Having spent a good portion of my career in research science, I naturally approach everything as an experiment: gather the data, analyze the data, re-adjust my hypothesis and approach, and run another experiment. It’s become so ingrained into my problem-solving methodology that I don’t realize I’m doing it most of the time. How that translates into the managerial work I’m doing now is that it appears that I’m in a constant state of process improvement. This is basically true: I’m constantly running little experiments, trying to solve a bigger issue or achieve a larger goal. I’m sure this drives some people on my team nuts as they are constantly hearing, “can we try this another way”, and I’m also sure that there’s little chance that I will fundamentally change this behavior anytime soon. My task is to make sure that the end goal is always in sight and that the team knows why we are iterating, the results of the iterations, etc. But I think that’s a topic for another post.

I’m leading off with an explanation of my general thought process because that is how I came to a new initiative with my team that is very relevant to what I’ve been discussing here. I recently obtained the DAMA Body of Knowledge (DMBOK) and am casually studying it with an eye toward certification. The Data Management Association International (DAMA-I) has been around since the 1980s and is composed of data management professionals from around the world. They offer certification as well as annual conferences. Anyway, the excessively large book began with a discussion on the definition of basic terms. At first I thought this was a little too basic until I delved deeper into the reading. It wasn’t that the definitions of basic concepts like “data” and “metadata” were so profound that I thought they encompassed all possible scenarios for the use of those words or anything transcendental like that. While I was reading through the rather dry text, it occurred to me that my team hadn’t set these definitions for ourselves. We hadn’t laid the foundation for our data management practice by collectively agreeing on what we meant by “data”, “metadata”, “data management” and “data management principles”.

I decided to start an Intro to Data Management information sharing series with the team. It’s a once-a-month session in our team meeting that will go through fundamental data management components, starting with the basics. I call it information sharing rather than “training” or “education” because I truly want it to be a discourse on best practices and a re-imagining of what these dry, textbook definitions and theories mean for us in our practice of lab data management. This is probably iteration #3 of our team meeting and I know we’re getting closer to a framework that meets the team’s needs.

Sidebar: This brings me to another iteration of practice that I introduced recently that is gaining some traction. I came across an article about silent meetings. (See resources here: https://hbr.org/2019/06/the-case-for-more-silence-in-meetings and here: https://www.businessinsider.com/better-idea-silent-meetings-work-2019-1, just to name a few.) The concept struck me at once because of the mix of participation and non-participation that happens in most of the meetings I facilitate. OK, let’s be honest, mostly non-participation. Having come from an organization where participation in meetings was considered an essential skill and the CEO valued “a good dust-up”, this culture of passively listening and head nodding was totally foreign to me. I knew people had good insights and ideas, they just weren’t coming out in the meetings. When I saw an article about silent meetings in a daily email digest I get, it was definitely a light-bulb moment. You can read the links for details, but the benefits of this type of meeting are that those who tend to be quiet in meetings can offer their ideas and you avoid the bias of everyone echoing the opinion or thought of the first person who speaks. So far the results have been good in the meetings and the feedback from the team has been overwhelmingly positive.

Now, after that rather large squirrel, let’s get back to data. So, defining the term data may seem like the most obvious thing in the world and not worth the time it takes to do. Let me offer this anecdote to illustrate why the time might be worth it. Where I currently work, we get a large number of data requests. We have formal procedures in place now, but not too far in the past, email requests would come in from a researcher asking for “the data”. We would email back and say, “OK, here is the raw data you asked for.” Nope, that was not what the requester wanted. Attempt #2: “Here is the processed, standardized data set.” Nope, that wasn’t it either. Attempt #3: “Here is the exact analysis data set used with the metadata attached.” Nope, that wasn’t it either. Attempt #4: “Here are the results (interpreted data)”. That was it!! Finally!! Now of course there are a number of issues with this anecdote including communication, intake processes, triaging of requests, etc. What I would like you to take away from it though is the importance of having a shared vocabulary and how much time it saves in rework.
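
The four “attempts” in that anecdote are really four maturity levels of the same data, and a shared vocabulary can be as small as a named list of those levels. Here’s a minimal sketch in Python; the level names and descriptions are my own illustration, not an official taxonomy:

```python
from enum import Enum

# Hypothetical shared vocabulary for the "which data do you mean?" conversation.
# Each member maps a short name to a plain-language description a requester
# could pick from up front, instead of discovering it over four email attempts.
class DataLevel(Enum):
    RAW = "raw instrument output, untouched"
    PROCESSED = "processed, standardized data set"
    ANALYSIS = "exact analysis data set, metadata attached"
    RESULTS = "interpreted results"

def describe(level: DataLevel) -> str:
    """Return the plain-language description for a given data level."""
    return level.value
```

A request form that asks the requester to choose one of these levels would have resolved the anecdote on attempt #1.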

Defining terms such as data, metadata, data management, etc. is extremely important no matter what kind of work your organization does. Obviously, for a data management and analysis organization, this is absolutely critical. But I would argue that this is critical for all organizations. If you do not define what “data” is for your organization, all sorts of assumptions can be made about how data moves through the organization, how it is secured, how it is managed, and how it is retained. The DMBOK defines data as an asset, “an economic resource that can be owned or controlled and that holds or produces value.” As such, it should be defined and managed. The organization I work for has done an immense amount of work doing just this: defining data, metadata, and data management. It had not trickled down into my team yet. We hadn’t taken those definitions and translated them into ones that were specific to the lab data we manage. Using the silent meeting structure, we defined what data meant to us in the context of the lab data we manage, what metadata was, how data management was defined, and what data management principles aligned with how we (and the whole organization) work. I got feedback after the meeting that both the format and content were well-received.

Now is where this post comes all together. Getting back to basics means:

  1. Defining what “data” and “metadata” mean to your organization. What do you include in your definition of data and metadata?
    1. Be comprehensive here. Assuming that data has to be numerical or electronic can be misleading. There is a lot of “data” that we would not have included in the definition not so long ago: names, addresses, the number of times someone visits a website, personal preferences for clothing, etc.
  2. Define what data management means to your organization and decide on your data management principles.
    1. What are the key components of how the organization should manage data?
  3. Outline your data management principles
    1. These should guide the data management practice
    2. Reflect on how your data assets are used or can be used to reach your organization’s goals
    3. You don’t have to do this from scratch. There are lots of good resources out there to use as a starting point.
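
The outcome of those three steps can be captured in something as lightweight as a shared glossary file the team maintains together. A minimal sketch in Python, with placeholder definitions (your organization’s wording will differ):

```python
# Hypothetical team glossary; the definitions below are illustrative
# placeholders, not my organization's actual wording.
GLOSSARY = {
    "data": "facts collected or generated in the course of our studies, "
            "in any form: numeric, text, image, or physical record",
    "metadata": "data that describes other data, e.g. who collected a "
                "specimen, when, and under which protocol",
    "data management": "the practices that keep data accurate, secure, "
                       "and usable across its life cycle",
}

# Principles should guide the practice; these are example statements only.
PRINCIPLES = [
    "Data is an asset and is managed as one",
    "Every data element has a single definitive source",
    "Definitions are agreed on collectively and revisited regularly",
]

def define(term: str) -> str:
    """Look up a term, case-insensitively, in the shared glossary."""
    return GLOSSARY[term.lower()]
```

The point isn’t the file format, it’s that the definitions live in one agreed-upon place everyone can cite.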

You can use a number of different types of meetings, asynchronous communication, etc. to get at these definitions (hey, why not throw a silent meeting in there?). Laying out this foundation for data in your organization, and making sure the whole organization (here I mean group, team, whole company, whatever you have influence over) is clear about this foundation, ensures a solid ground on which to build any data project.

So, care to run a little experiment with me? Can you define data for your organization? Do you have clear and consistent data management principles? If not, join me in enacting a bit of change, one little iterative experiment at a time, starting with the basics.

Click Here to Join the Meeting

Update: Apparently what happens when you post something to the internet saying you can’t find sources of information is that they then fly out of the woodwork (or internet ether in this case). Since posting this, I’ve come across some great resources about working from home. I can’t say whether that’s because of the significant increase in people working remotely in the past week or not, but it has been helpful. I’ll post a list at the end of the blog. (See what I did there? Now you have to read it.) Two things my team has started since I wrote this: First, daily stand-ups in Slack. I even learned how to program a reminder in Slack for those, which brought my Slack skills up to intermediate and I’m so proud! (reference below for instructions) This is a way for us to connect each day with a short round-table of successes, challenges and things we’re looking forward to. Plus, it involves emojis! Second, we’re going to try a team movie night…well…not really a night since some of us have young kids, but we’re going to watch a TED Talk together using MS Teams. Everyone will share what snacks they have first and use the chat feature to converse during the talk. I’ll let you know how it goes.
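
For anyone who wants to set up the same kind of stand-up reminder, Slack’s built-in /remind slash command handles recurring schedules; the channel name and wording below are my own placeholders:

```
/remind #team-standup "Post your daily stand-up: successes, challenges, and one thing you're looking forward to" at 9:00am every weekday
```

Typing /remind help in Slack lists the other scheduling phrases it accepts.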

So far I haven’t had the time to sit down and map out the data management posts that I want to write about this year. There are a lot of topics that I want to cover to continue to discuss the need for data management in research and what my team does with lab data. But that’s a post, or posts, for another time. I have something else in the forefront of my mind related to managing teams.

My workplace is one of the companies that has recently asked employees to work from home for an extended period of time due to the potential of coronavirus spreading. Fortunately, we already had remote work policies in place, a technology suite (Slack, MS Teams, etc.), best practices around virtual meetings, and a culture of including remote workers, so the transition has not been too bumpy. Aside from making sure people have monitors, etc., there hasn’t been too much interruption in the day-to-day work. In fact, I was pleasantly surprised with how smoothly all the virtual meetings have been going so far (minus forgetting about a video meeting and looking like I just rolled out of bed).

Since I’m not one to just take a good thing at face value, I’ve been pondering how to keep engagement up and the sense of team cohesion going through this extended work from home arrangement. Of course I immediately went to do in-depth research on engagement for fully remote teams by Googling it. Snicker if you will, but Google does bring up some great ideas, especially if you know what sources to trust. So I wade through the Google search results for hits from sources like Harvard Business Review, Business Insider, Forbes, and sometimes even articles on blogs from businesses like Slack or Smartsheet.

There were some good ideas in the articles I found, fun games to play and ways to connect online. To be honest though, not as much as I would have expected given the changing landscape of work. There were more articles about managing remote workers through the lens of incorporating them into an existing on-site team and the repeated advice to bring the team together in person to really ensure bonding (um…not an option right now). Granted, this was not an exhaustive search by any means but it was still a bit surprising. Here is what I’ve gathered from my own experience this week with managing a remote team:

  1. Having the right technology is key but it doesn’t have to be exclusive. Our teams are predominantly on Slack, which does help for quick questions and a feeling of cohesiveness. We’re also using MS Teams for virtual meetings since Slack hasn’t always worked for larger meetings. Having options allows for flexibility and the ability to switch platforms as needed.
  2. Set clear expectations for your team about working remotely. We had both an organizational remote work policy and team-specific add-ons. Even little things like letting everyone know your work hours and how to communicate when you will be unavailable for appointments, etc. helps.
  3. Little things make a big difference. We started our first day off with sharing pictures of our work from home spaces. This small effort, suggested by someone on my team, helped each of us visualize where the team was working and started the day off with a bit of normalization and fun.
  4. One of the best recommendations that I saw during my research was to have virtual video chats on topics not related to work. My team has a standing coffee meet-up when we’re co-located and they are going to keep that going now that we’re all at home. They changed it to a virtual meeting where they will chat and catch-up with coffee brewed at home. These non-work related meet-ups help capture that esprit-de-corps you get in person.
  5. Don’t be afraid of the video meeting. As much as it was slightly embarrassing to be caught off-guard this week in a meeting, that embarrassment was momentary and went away as soon as I saw everyone else in their casual wear. Additionally, the benefit of seeing people while the meeting was occurring far out-weighed any fleeting sense of unease about meeting in my hoodie and make-up free face.
  6. Keep your sense of humor. Humor is one of the self-identified values of my team. A well-placed GIF in a Slack conversation goes a long way.
  7. Keep the team aware of the work being done as a whole. Whether that’s tracking work in one online location everyone can access, or posting your top 3-5 To Do items in a Slack channel (our practice), it’s easy for remote teams to get isolated and siloed in their work. Helping the team be aware of what everyone is working on (including you as a manager) keeps the sense of a cohesive team while everyone is working apart.
  8. As a manager, be prepared to see some decrease in productivity prior to an increase. Studies have shown an increase in productivity for remote work given the lack of office distractions. What we found in the transition was a slight decrease, especially for managers, as teams adjusted to the new way of interacting. Without the ability to drop by a desk or office, there was a noted increase in email and Slack traffic for managers as questions needed an answer. This is likely to level off as the teams settle in.

That’s all I’ve got for now, but we’re only a few days into this particular experiment. I’ll update my recommendations as we go.

Suggested Reads:

  1. https://slack.com/slack-tips/run-daily-standups-or-check-ins
  2. https://knowyourteam.com/blog/2020/03/05/its-time-to-kill-the-daily-stand-up-meeting/
  3. https://business.linkedin.com/talent-solutions/blog/employee-engagement/2019/strategies-companies-use-to-keep-remote-workers-feeling-included
  4. https://centricconsulting.com/blog/coronavirus-how-to-stay-connected-productive-when-transitioning-to-remote-work/
  5. https://www.smartsheet.com/content-center/executive-center/reports-research/unlock-potential-distributed-workforce

Team Building 101

As you probably know if you follow this blog, I’m a big fan of the Harvard Business Review. When I started as a senior manager two years ago, I landed in charge of a team of 30 people. I had never had a direct report in my career and to say I was nervous would have been a severe understatement. My boss agreed to pay for a subscription to Harvard Business Review (HBR) and I have become an ardent follower of them since then. I even have 3 of their podcasts on my phone. I mean, you have to listen to something on the bus commute, right?

When I started trying to figure out what I wanted to do with my career post-fellowship, I was told by a few people that I should consider getting an MBA. After so many years of graduate school, the thought of more school was physically repugnant. I have a terminal degree, why in the world would I go and get another one? Not to mention that I had just finished paying off my student loans and there was no way I was going to acquire more. Now that I’m deep into senior management, I do have much more appreciation for the value of getting an MBA. Since I’m still not in a position to go back to school, and I’m still not convinced I need to, I’ve been piecing together my own MBA of sorts. So far, this has involved the subscription to HBR, an executive mentoring group, some online courses, learning from others and most recently an executive coach. I think it will all add up to me becoming a better manager, hopefully.

One of the areas that I’m actively working on as a manager is leading change. The team I manage has been through a lot of change in the past two years, including me coming on board. I could spend a lot of space here detailing the extent and scope of that change, but instead I’ll summarize and say that moving from an organization that supports research to one that supports product development and research is a massive paradigm shift. This might seem like a fine distinction but there are broad implications to this change that stretch from redefining best practices and processes to rethinking the team’s identity.

This is where my ad-hoc MBA training has helped me. It did seem daunting to manage not only the change the team had already been through but all the change yet to come. The article that really clicked for me was from HBR and it talked about team motivation. You can read the entire article here: https://hbr.org/2012/04/increase-your-teams-motivation. The point of the article is that people are much more committed to an outcome (by a factor of 5:1) when they get to choose. How I translated this into my team was that they would be much more motivated to change if they could choose what that change looked like. Operationally, I decided on a team retreat to accomplish this.

We held our second annual team retreat a few weeks ago. We’re not exactly pros at this yet but I think just holding the retreats is a victory in and of itself. I say that because it is during these retreats that the whole team has an opportunity to weigh in on all the changes happening in the group, and help to shape the direction of the team and decide what is important for us all to focus on. We started the day deciding on the mission and vision of our team. While my organization has a mission and a vision, I thought it was important for the team to have one as well, especially a vision, that way everyone can be on board with where the team is striving to go.

Next we moved on to team goals and defined our top 6 strategic goals for the year and their priority. In my opinion, that last bit is the key. If you don’t prioritize your goals, then no one on the team knows how to prioritize their work. I’m all about ruthless prioritization to ensure that everyone, including myself, is putting energy mostly into the tasks that are aligned to the strategy of the team or organization. Prioritization can be a difficult exercise when there is a lot to do or a big change to undertake, but it is possible and it is very much worth the effort.

The rest of the retreat involved outlining the tasks involved to complete each goal and then deciding on the first next step in each task. We finished the day spending time with the team we rely on the most, the lab programming team. Cross-functional interactions can be challenging for a number of reasons and having dedicated face-to-face time together to discuss challenges and successes makes a difference. (See here for another HBR article about team retreats: https://hbr.org/2018/09/stop-wasting-money-on-team-building.) My hope is that involving the whole team and our cross-functional partners in the process of shaping the change will result in increased commitment to that change. Only time will tell.

I know that none of this sounds like rocket science, vaccine development science, drug development science or otherwise, but it does work and the research has borne that out. This journey is one where I am learning every day and increasing in my confidence as a manager every day. I do feel like I have a good set of tools at my disposal to aid in my success. More importantly, I have a talented and engaged team that has set a vision and is committed to reaching that vision. So maybe I don’t need an MBA after all.

What to do with a vial of blood?

You may have thought from the title of this post that I was going to post some vampire fan fiction. While this wouldn’t be the first time someone thought I was a vampire (that happened years ago collecting blood at night in Haiti for a lymphatic filariasis survey), that’s not really my thing. Last time I talked a bit about the differences between clinical data collected on the Case Report Form (CRF) and non-CRF laboratory data. For today’s post I’m going to walk you through the life-cycle of a specimen and how my team ensures that every specimen possible can be used for testing and subsequent analysis.

The life of a specimen starts at a local lab when the study protocol indicates that a sample is needed for particular testing at that specific visit. The vast majority of this is decided ahead of time when the protocol is being finalized. There are specific tests that need to be run at specific time points, either before and/or after treatment or vaccination. For example, at the peak immunogenicity time point post-vaccination, there are specific immunological assays that have to be run to determine if the vaccine has elicited an immune response. For the sake of brevity for this post, I’ll defer discussions on what immunological assays are run for another post. Try not to be too overcome with anticipation.

The tube, or tubes, of blood collected at the clinic are sent along to a local lab to be processed and to have some safety labs run. You’ll remember from a previous post that the type of lab data that I will be opining/educating about is the non-safety lab data for clinical trials. Accompanying the vial(s) of blood is often a written form that includes an inventory of the vials in that shipment and some metadata surrounding the vial, including participant ID, visit number, visit date, specimen type, etc. Now, I want you to pay particular attention to this seemingly minute detail. Because now we have metadata for that specimen entered in the CRF (the lab tech had to check off in the CRF that the specimen was collected, and that check produces metadata around the participant ID, specimen type, visit number, and date and time collected for the specimen, all of which is recorded and retained in the clinical database). We also have that metadata on the physical sheet that goes along with the specimen to the processing/local lab. One of the tenets of data management is that if the same information is entered in multiple places, there will likely be errors.

Right now our specimen (i.e. vial of blood) is at the local lab or processing lab to be processed into plasma or serum or cell pellets. Those blood products are aliquoted out and stored either at the local lab or often, at a repository. Now don’t think that all those little tubes are sitting in freezer boxes all nameless. All that metadata that was entered into the CRF and transferred to the lab form is now entered into a Laboratory Information Management System (LIMS). LIMS systems are used to manage all the information around specimens and assay results. If you’re keeping track of our specimen metadata, we now have metadata for the specimens in the CRF, on a physical form and in the LIMS. And every little aliquot (tube) that was derived from the single specimen has that same metadata associated with it.

Now a testing lab is ready to perform testing on a designated aliquot, as outlined in the protocol. The specimens are shipped to the lab with a shipping manifest that contains an inventory of the specimens in the shipment. The specimens' bar codes are scanned into the receiving lab's LIMS system and now the fun can begin. For those of you keeping score, the metadata around the specimen now resides in: 1) the CRF, 2) the lab form, 3) the LIMS installation at the processing lab, 4) the LIMS installation at the repository (if one is being used), 5) the LIMS installation at the central or endpoint lab…and a partridge in a pear tree. As you can imagine, having the specimen metadata replicated in all these different places can lead to errors occurring as a consequence of data transfers and being perpetuated through all the downstream locations. This is where my team comes in. We programmatically compare the specimen metadata in the CRF to the metadata in the LIMS. The goal is to identify and correct all errors before the specimens are shipped out to the labs performing the testing. In order to accomplish this daring feat of data management, we have a crack team of programmers supporting us, creating and maintaining the code that does the comparison and produces reports of the errors.
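To make that comparison concrete, here is a minimal sketch of the kind of check involved. This is not our actual code or schema; field names like `ptid`, `visit` and `collection_date` are invented for illustration:

```python
# Compare specimen metadata between two data streams (e.g. CRF vs. LIMS),
# joined on a shared specimen identifier. Field names are hypothetical.

CHECKED_FIELDS = ("ptid", "specimen_type", "visit", "collection_date")

def compare_metadata(crf_records, lims_records):
    """Return one discrepancy tuple per mismatched field:
    (specimen_id, field, crf_value, lims_value)."""
    lims_by_id = {rec["specimen_id"]: rec for rec in lims_records}
    discrepancies = []
    for crf in crf_records:
        lims = lims_by_id.get(crf["specimen_id"])
        if lims is None:
            # Specimen was checked off in the CRF but never arrived in the LIMS.
            discrepancies.append((crf["specimen_id"], "missing_in_lims", None, None))
            continue
        for field in CHECKED_FIELDS:
            if crf.get(field) != lims.get(field):
                discrepancies.append(
                    (crf["specimen_id"], field, crf.get(field), lims.get(field))
                )
    return discrepancies
```

In practice a report like this would be run on every data transfer, with each discrepancy row routed back to the site or lab for resolution.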

Of course, nothing is ever as simple as “generate a report and be done”. The lab data managers on my team work very closely with clinical sites and labs to determine the source of the error and what the definitive source of any given metadata is and to ensure that changes are made in all places where the metadata may be incorrect.

So why all this effort to ensure that a visit date for a specimen is correct? Does that really make a difference in the grand scheme of a whole trial? Channeling our inner consultants, let's unpack that assumption. Due to the complexities of participants who are on PrEP, or the fact that HIV vaccines elicit anti-HIV antibodies, HIV diagnosis for clinical trials follows a testing algorithm where specific tests are dictated by the results of previous tests (confirmatory testing) or the visit type in the study (i.e. before or after vaccination). This is actually done for HIV testing outside of clinical trials as well. There is a required confirmatory test if you test positive by a rapid test, the same way a woman would go to the doctor for a confirmatory pregnancy test. https://www.cdc.gov/hiv/testing/laboratorytests.html But I digress. As I mentioned, the HIV diagnostic testing algorithms can differ by visit. If the wrong algorithm is run on a specimen because the visit number was incorrect in the metadata, it could lead to the wrong result for the participant. That's obviously not something anyone wants to happen.
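The visit-driven algorithm selection above boils down to a lookup keyed on the specimen's metadata. Here is a toy sketch of that idea; the protocol numbers, visit labels and test names are all invented for illustration and are not any real study's algorithm:

```python
# Hypothetical mapping from (protocol, visit type) to an ordered list of tests.
# Wrong metadata (e.g. wrong protocol or visit) selects the wrong algorithm.
ALGORITHMS = {
    ("001", "pre_vaccination"):  ["antigen/antibody combo", "antibody differentiation"],
    ("001", "post_vaccination"): ["antigen/antibody combo", "HIV-1 NAT confirmation"],
    ("002", "post_vaccination"): ["rapid antibody", "antibody differentiation"],
}

def select_algorithm(protocol, visit_type):
    """Return the ordered list of tests to run for a specimen's metadata."""
    key = (protocol, visit_type)
    if key not in ALGORITHMS:
        raise ValueError(f"no testing algorithm defined for {key}")
    return ALGORITHMS[key]
```

The point of the sketch is just that `select_algorithm("001", "post_vaccination")` and `select_algorithm("002", "post_vaccination")` return different test lists, so a metadata error silently changes which tests a specimen receives.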

While that example is on the extreme end of the spectrum of what-ifs, metadata errors in other fields can lead to the wrong testing being performed, which would lead to incorrect data ending up in the dataset for analysis. If the lab data are being used to evaluate study endpoints, the quality of the lab data is paramount. One of the main goals of my group is to make sure that the lab data used for analysis is as clean as possible and that each data point is a valid data point.

From an ethical standpoint, ensuring that each specimen collected from a participant can be used is critical. Clinical trial participants are a special breed of people who are willing to be part of these studies, sometimes not for immediate benefit to themselves but for the advancement of the science toward a cure. The whole study team is dedicated to guaranteeing that a participant's involvement in a trial isn't for naught. Our small contribution to that guarantee is to try to make sure that any specimen they give as part of the trial is tested, that the data are used for analysis, and that participants aren't brought back for additional specimens unnecessarily because no one can find their initial specimen.

I hope that I have convinced you that specimen management is a vital part of the clinical trial process. Please add a comment if you have any questions about the process or why we’ve invested so much time and energy into it.

Up next time…I get back to my “how to run a team” posts with an update of a team retreat we just had.

Lab Data: The Special Snowflake of Clinical Data

We briefly discussed clinical trial data in the last post and the methods used to collect, clean and analyze the data, or at least where you can go find that information. Now we finally get to lab data. Lab data may seem straightforward: you get results from labs, you add them to the other data from the trial and you're all set. That is not the case, however, for many trials. Lab data has some nuances to it that make it a bit of a special snowflake.

The decision of where lab data ends up as part of the clinical trial data has to do with 1) what type of lab data it is, 2) how the laboratory and testing structure is set up for the trial, and 3) what type of endpoints the lab data will be supporting.

Let's start with types of lab data (cause, you know, starting with 1 is usually a good idea). In my organization, lab data is divided up between what we call safety lab data and non-safety lab data (fancy, huh?). Safety lab data are the result of testing done on samples to "ensure that patients are not experiencing any untoward toxicities". (Chuang-Stein C, 1998) These tests are usually ones that you would see at a doctor's office: liver enzymes, white blood cell counts, etc. In the clinical trials that my organization supports, this lab data is entered into the CRF by testing labs connected with each clinical site or group of clinical sites. Entering safety lab data into the CRFs is industry standard, as it keeps all the safety data available to be examined regularly to ensure the safety of the participants. The workflow for safety lab data is: a sample is collected from a participant at a visit to the clinical site, and that sample is processed and sent to a local lab for testing. Results are sent back to the site, which enters them into the corresponding CRF for that participant and visit. The safety lab data is managed by the Clinical Data Managers (CDMs) for a study, and quality checks and processing procedures are the same as for the other data collected on the CRFs.

Non-safety lab data consists of lab data that is not generated in support of safety considerations. This spans a whole range of data, including immunogenicity data for vaccine trials and pharmacokinetics (PK) for drug trials. The tests for non-safety lab data can be performed at either local labs or central labs, but the key is that the results are not sent back to the site to be entered into the CRF. This is because there are usually no reporting requirements for non-safety lab data. (If a participant has a low white blood cell count, for example, the site would be required to counsel them and perhaps refer them for additional testing.) Since the non-safety lab data is not reported onto the CRF, it has to be uploaded to the data management center in some way, cleaned (quality checks performed and errors resolved), and then the data are merged with the other clinical data for analysis. The distinction between CRF and non-CRF data is a big one. The CRF data is collected and managed in a Software as a Service package (in our case Medidata Rave) that allows for creation of the CRFs, data entry, data cleaning and database creation all in a single, validated and maintained system. Data that comes into a data management center outside of the electronic data capture (EDC) or other CRF system does not have this built-in functionality associated with it, nor the infrastructure around it to make data creation, cleaning and storing relatively easy. Lab data is not the only type of non-CRF data, and so these issues span other areas such as questionnaires, SMS or text data, or participant diaries. Since I have absolutely no expertise in those areas, I'll stick to the lab data. Developing the systems to import, process, store and distribute non-CRF data is a big undertaking and I will discuss some of the ways we do this in upcoming posts.

Lab data will be used within the context of a clinical trial, so many organizations opt to embed the lab data management within clinical data management. My organization has opted not to do this, though our processes are aligned with the clinical data management team. Part of the reason we have split lab data management out from clinical data management has to do with the two other features of lab data management that determine where the lab data ends up in the overall data of a clinical trial.

The workflow of the lab samples and testing may not, on the surface, seem like it would influence what happens with the data downstream, but it can have a big impact. As I mentioned above, there are a few different set-ups for laboratory testing that I've seen with clinical trials, and probably endless combinations from there. One scenario is to have the samples drawn at the clinic and sent for processing to a local lab. That same local lab would then perform the safety lab tests and diagnostic testing and store additional aliquots of each sample. Those additional aliquots would then be sent to central or specialty labs for more advanced testing (i.e. immunogenicity or PK testing). In this scenario, the diagnostic test results would be sent back to the clinic along with the safety lab results and reported on the CRF. In order to ensure quality and consistency across multiple labs, a selection of samples could be sent to a central lab to verify diagnostic status.

In another scenario, the samples are collected at the clinic, then sent to a local lab for processing and safety lab testing and then aliquots sent to a central repository. The aliquots of the samples would then be sent out to central or specialty labs for immunogenicity, PK or other specialized testing. Additionally, diagnostic testing can also be done by central labs as opposed to local labs.

So what are the implications of these different workflows? In the first workflow, all the safety results and diagnostic results would be reported on the CRF. Any specialized testing would have to be reported through a mechanism other than the CRF, but done in such a way as to make the results data compatible with the other data from the trial. This is where my team comes in. We receive specialized testing data, process it, resolve errors and create datasets for analysis. The same is true for the second workflow, with specialized lab results having to be sent to the data management center via a secure and consistent pipeline apart from the clinical data stream, and my team receiving and processing the data. A centralized diagnostic lab would have to report the data back to the sites to enter into the CRF in order to be able to give those results to participants. However, in the case of the diagnostic data that we handle from a centralized diagnostic lab, that data comes through my team first, where we perform quality checks and ensure that the correct testing has been done on the correct samples. So where the lab data is coming from influences how it becomes part of the overall data for a trial and who handles it along the way.

Up until now, the reasons why lab data can be unique have to do with the type of lab data being processed and the route by which the data came to the data management center. Taking these two characteristics together, you could still make the case that the lab data could all be reported on the CRF and handled by CDMs, which I stated earlier is how many organizations operate. The final consideration in this argument is what analyses the lab data is supporting (i.e. what type of endpoints will use lab data in the analysis). An endpoint for a clinical trial is defined as "a measurement determined by a trial objective that is evaluated in each study subject". (Self, SG, 2004) Essentially, it's what you are measuring your intervention against. Most endpoints for clinical trials are safety and efficacy focused and are called "clinical endpoints": essentially, is the intervention safe and does it work in stopping or preventing disease. The key word there is "disease". Aside from the safety measures we discussed above, the goal of a clinical trial is to ensure that an intervention works, and in the world that I am in, "works" equals prevents HIV. So the endpoint of a clinical trial would be: does this intervention prevent HIV? That is over-simplifying by a rather big extent. There are different phases of clinical trials that have different purposes, the first of which (Phase I) is just to ensure that a product is safe in humans, and if it's a vaccine, that it elicits an immune response. But for now, "does it prevent HIV" is good. From the lab data perspective, traditional clinical endpoints are relatively easy. Safety data and diagnostic data are reported on the CRF, so there is little to do that is different from any other data in the trial.

But what do you do if you're researching a disease like HIV, or cancer, where the clinical endpoint can take some time to appear? Trials are long enough as it is, and waiting a longer time until onset of disease can mean more time until a product is available. What if you are trying to improve on an already existing intervention? The existence of an already-licensed vaccine, for example, may mean that the incidence of that disease in the general population has been reduced such that a huge trial would be needed to get enough infected individuals to have a robust statistical analysis. These considerations, and others, have led researchers to adopt what are called "surrogate endpoints". A surrogate endpoint is a "biomarker that can substitute for a clinically meaningful endpoint for the purpose of comparing specific interventions". (Self, SG, 2004) In the vaccine field, these can be correlates of protective immunity, or "biomarkers associated with the level of protection from infection or disease due to vaccination." (Self, SG, 2004) The laboratory data that would support a surrogate endpoint or correlate of protective immunity would be the immunogenicity data that I referred to above, which potentially is not part of the CRF. Why does this matter? Data used to support primary or secondary endpoints in clinical trials is the data that is under the most scrutiny from a regulatory perspective. The prime objectives of the study are the ones that regulators are interested in, and then there are always additional analyses done by researchers for more scientific reasons.

Ideally, you would want all the laboratories involved in a clinical trial to report results in such a way that the data is entered into the CRF. However, the logistics of this can be challenging, especially when surrogate endpoints are not already defined and there is a large amount of research going into new methodologies and laboratory tests to define those endpoints, which means lots of labs reporting data. This is where I would argue that splitting out the lab data management into its own team is important. While it would seem that having the CDMs handle all the lab data would be advantageous, since they are familiar with data handling and having one team handle all the data is good from a consistency standpoint, I think there are more advantages with the split-team set-up, and not just because I manage such a team. Having lab data separated out into its own team allows the individuals on the team to become highly specialized in handling a type of data that will not have the standardization or harmonization of the clinical data. For clinical data, there is the CDISC system, which provides a framework to harmonize data structures from data collection through dataset creation and into analysis. This same system does not yet exist for specialized laboratory data. There are lab data components within certain portions of the CDISC system, but it lacks the same infrastructure to assure standardization from data collection to analysis. Therefore, lab data arrives at the data management center in every sort of shape and format, and we are responsible for putting it into a format that will fit the statisticians' needs for analysis and fit into the CDISC structure used by the other clinical data. This is not a cookie-cutter type of activity, and having individuals who are trained on laboratory assays, in addition to data management, produces a higher-quality output, at least in my opinion.
Also, having a team that is trained in the laboratory assays being used means that communication with the laboratories is smoother. My team can speak the same “language” as the labs and can help with data issues since they understand how the data was generated. Data management involves a lot of communication and cooperation to resolve issues with the data and having a specialized team helps. It also allows me to elevate the visibility of lab data within the organization. With surrogate endpoints becoming more and more frequent in the clinical trials arena, having lab data occupy the same strategic importance within an organization is advantageous from an operations and business perspective.

Whether or not lab data management is done by a separate team or the same team as the clinical data management, there are considerations that make lab data a bit of a special snowflake. The lack of one system to manage the data all the way through the trial (at least in some cases), the variability of the data and the lack of standards for non-safety lab data make this a dynamic and challenging field to work in. In upcoming posts, I will go into how my team manages the challenges.

The Art of Data Perfectionism

The title of this post includes the word perfectionism. The reasons why are elucidated below. Between when I started drafting this post and now, I had some thoughts that I wanted to add as a preamble of sorts. I keep coming back to why I don't post on the blog regularly. I could of course blame the fact that I work most nights after dinner, have a family, a social life and am currently taking an online course in Jira. But, as I've said before, we make time for the things in life that we're passionate about and want to do. I am really passionate about this blog, so what's the hold up? This might be the one area where perfectionism is holding me back.

I'm not a perfectionist by trait. I've never used that as the answer to the "tell us a weakness" question in interviews. I firmly believe in ruthless prioritization and the 80/20 rule. Also, having been a research scientist, I tend toward iterative creation, design, etc. Getting trained as a Scrum Master was almost like second nature, because of course you would design and produce iteratively, only putting into each development cycle what was really needed. So it's a hard feeling to reconcile now, this perfectionism with the blog. It's not like I have tons or even tens of followers, so the fear of messing up should be low. Except that it's not. This goes back to a topic I wrote about in another blog post (Identity Crisis). Having gone through the PhD process in the US and spending the majority of my career thus far in research science, I have this ingrained and ridiculous notion that only people who have studied something for their whole lives (or non-stop for 4 years) have the authority to speak about it. The culture of "elder respect" in research science is strong. I just haven't gotten my head around the idea that not only am I qualified to talk about a range of topics due to my experience to date, but that I am qualified to talk about clinical trials and data since I live that work day-in and day-out. I'm currently reading a book called "Playing Big" by Tara Mohr, which is a study on why women have a harder time "playing big", so to speak, and what to do about it. I'll let you know how it goes, but hopefully one consequence of the process will be me getting my voice out there more.

Of course, the stakes for me with this blog are pretty low. The only real risk is a reputational one if I get something wrong. In the world of clinical trials, the risks for inaccurate data can be much higher. (See what I did there? Slick, huh?) The individuals who are on the front lines of keeping data quality high in clinical trials are clinical data managers and clinical data coordinators. These individuals are often certified and are, out of necessity, perfectionists. Every little detail matters when you're setting up and managing the data from a clinical trial, from the initial data entry forms to the dataset creation at the end of the trial and locking the database.

Clinical data management is the “collection, integration and validation of clinical trial data”. Done right, clinical data management can reduce the time to market for important health interventions by ensuring the generation and retention of high-quality, reliable and statistically sound data. (Krishnankutty, 2012). High-quality means that the data conforms to protocol specifications and that it contains little to no errors or missing data.

The process starts with the development of the protocol. For the uninitiated, the protocol is a document (often very lengthy) that describes how the trial will be conducted, and ensures the safety of the patients and the integrity of the data. Depending on the organization, clinical data managers are often involved at this early stage. From there, the clinical data managers are integral to setting up the study and how the data will be collected, including what checks will be done during the course of the trial to make sure that the quality and integrity of the data remains intact.

While the trial is ongoing, clinical data managers use a variety of tools to track the data, try and solve discrepancies in the data or find missing data and help to ensure patient safety. If this sounds like individuals have too much control over the data, rest assured that there are pages of regulations that govern operations of clinical trials and the data associated with them and clinical data managers are often at the front line of meeting those regulations.

So with all this to juggle and the results of a trial hanging in the balance, how do clinical data managers do their job? Having worked with them for over a year, I can tell you that they are very committed and very detail-oriented people. They also have fairly clear guidelines in the regulations for how the data should look, or how to ensure data quality (i.e. audit trails, etc). Additionally, there are several professional societies that offer certification, ongoing education and a community of practice. One such organization, and a good place to find information, is the Society for Clinical Data Management (SCDM): www.scdm.org.

So why this whole post about, first, my insecurities, and second, the briefest of overviews of clinical data management? With this post, I'm straddling the dual purposes of this blog: 1) to share my experiences as they happen and as I grow in my career; 2) to highlight the lab data management portion of clinical trials. This first post is to introduce the concept of data management as it pertains to clinical trials in the traditional sense. As I post more (which I will, I promise), I will contrast this with how lab data is viewed and managed in the context of clinical trials, and hopefully how those practices can assist in non-clinical research as well.

What is Lab Data Management Anyway?

I thought that for this post, I would introduce the new subject on the blog, lab data management. The idea is that in addition to providing witty reflection on how I got to where I am in my career, I would talk a little more about what that career looks like.

Before I can get to my career and what I actually do (still trying to figure that one out), I should provide some background. Lab data management is a subset of clinical data management, so I'll start there. I am going to use the Wikipedia definition since I got rid of my encyclopedia set decades ago. Clinical data management is a set of processes and procedures that "ensure collection, integration and availability of data at appropriate quality and cost". The goal of clinical data management is to generate high-quality, reliable and statistically sound data to ensure that conclusions drawn from research are well-supported by the data. So, no pressure…right?

In many clinical trial settings, both in-house and contracted out (CROs), lab data management is conducted by clinical data managers along with the management of all the other clinical data. There are only a few institutions that I’m aware of that separate the laboratory data. I should clarify that when I’m talking about lab data, I’m not talking about the safety labs done to monitor the participants during the course of the trial (white blood cell counts, liver enzyme tests, etc). Those are monitored along with the other clinical data, at least in our organization. Lab data for my team consists of the endpoint data (HIV diagnostic data), pharmacokinetic (PK) data for drug trials and a whole host of immunology assays that are being done to assess the immune response to vaccines.

So what do we do with the lab data? I'm so glad you asked. Lab data management for us can be grouped into two broad categories: specimen monitoring/specimen data quality control and assay data processing. Specimen monitoring and specimen data quality control are essentially the same thing; for the purposes of this post, I'll call it specimen monitoring. In all clinical trials, participants have specimens taken. It's usually blood draws, but it can also include tissue biopsies, etc. The metadata around these specimens can end up being entered in two different data streams: the clinical data stream (i.e. the Case Report Form filled out when a participant comes in for a visit) and a Lab Information Management System (LIMS), which is filled out when the specimen is processed in the lab. In order for the specimen to be used for HIV diagnostic testing or immunological testing, the metadata has to match in both places.

Let's take the example of HIV diagnostic testing. There are algorithms for HIV testing to determine not only if someone is infected, but whether it is an acute or chronic infection. HIV testing algorithms are not the same for every study. If you are performing an HIV vaccine trial where the whole point is to elicit antibodies against HIV, you will have to have a series of tests to determine whether the antibody responses that show up positive on a diagnostic test are vaccine-elicited or from actual HIV infection. If you are testing an HIV prevention intervention, the testing algorithm will be different. So if the metadata for a specimen at the time of draw says that this blood tube is from visit 4 of protocol 001, then the diagnostic lab knows which testing algorithm to run. If, somewhere in the process of sending the tube to the lab and transferring information from the clinical database, to a specimen label or lab requisition form, to the LIMS, the metadata got changed to visit 4 of protocol 002, then the testing algorithm would be different. That would render any data from that testing invalid.

One whole scope of work for my team is to ensure that the metadata from a specimen remains correct throughout the course of the study, no matter what data stream that specimen appears in. We accomplish this by programmatically comparing the different data streams each day and issuing QCs when the data doesn’t match.  We then work with the labs and clinics to find the reason for the data discrepancy, the source documentation to determine the real value and to correct the QC.  This ensures that as many specimens as possible can then be used for testing.  Participants trust that when they donate blood or tissue, that it will be put to good use and we help to ensure that it will be.

The second large scope of work for the team is assay processing.  After clinical specimens have been processed and sent to labs for testing, we receive that assay data back into our group.  We again check to make sure the specimen metadata is clean and we also do additional quality checks to evaluate the data for format consistency, logic (if there is supposed to be a numeric value, we check to make sure the values are numeric), and some range checks and other assay specific checks.  This part of our work is important because not only do we want all specimens to be able to be used for testing, we want all the lab testing data to be used in the statistical analyses.  We provide consistently formatted and clean datasets to the statisticians for their analysis. 
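To illustrate the kinds of checks described above, here is a minimal sketch of a per-row quality check. The column name `result` and the allowed range are assumptions for illustration, not any real assay's specification:

```python
# Illustrative quality checks on one incoming assay result row:
# a format/logic check (value must be numeric) and a range check.
# Field names and the valid range are hypothetical.

def check_assay_row(row, valid_range=(0.0, 100.0)):
    """Return a list of QC issue strings for a single result row."""
    issues = []
    try:
        value = float(row["result"])
    except (KeyError, TypeError, ValueError):
        # Missing field, None, or text where a number was expected.
        issues.append("result is missing or non-numeric")
        return issues
    lo, hi = valid_range
    if not (lo <= value <= hi):
        issues.append(f"result {value} outside expected range [{lo}, {hi}]")
    return issues
```

A real pipeline would layer assay-specific checks on top of these generic ones and route each flagged row back to the lab as a query, but the basic shape (validate each row, accumulate issues, report) is the same.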

In short, the lab data management group at SCHARP is dedicated to preserving high-quality laboratory data for analysis in clinical trials by safeguarding the metadata around clinical specimens and providing consistent and clean laboratory datasets for analysis. If you're interested, I can go into more detail about how we do this in subsequent posts. I will definitely be doing more posts about why it's important to think about data management, even in a research setting, and discussing some methods and best practices for how to start implementing lab data management, regardless of the setting.

Dear Lexi

Since I started at the Gates Foundation, I’ve had a fairly steady stream of people asking for advice on how to get into public health or how to move out of the lab. I remember doing easily 50 or 60 informational interviews when I was in graduate school and feeling like I didn’t get a lot of concrete advice, which I now know isn’t really the point of informational interviews, but regardless, I felt a little defeated. I’ve gotten better at collecting advice over the years so here is my advice column so to speak on moving out of the lab or changing career fields.

  1. Develop an elevator pitch – If you are going to do informational interviews or ask anyone for help in your job search then you should be ready to talk briefly about yourself and where you are looking to go next. If you can compare what you want with a position the person is already familiar with, even better. Help them help you.
  2. Do informational interviews – Informational interviews are a great way to find out more about a career that you might be interested in. People love to talk about themselves so I’ve gotten great responses from cold emails to people in my network or in a contact’s network. The key thing with informational interviews is to have a good list of questions for your interviewee and remember the point is to find out about that job and what it’s like and what it takes to get there.
  3. Consider a human voiced resume – This type of resume has been big in the business world for a few years now but oddly hasn’t translated into the sciences. Human voiced resumes are more narrative, which allows for a more thorough explanation of your skills. They are great for career transitions in science since they are still somewhat novel and will stand out and the narrative style means you can provide more context around your skill sets. More information here: https://www.forbes.com/sites/lizryan/2016/01/15/grab-your-hiring-managers-attention-with-your-human-voiced-resume/#1c8d6b2b6738
  4. Write a cover letter – Please, please, please. I have hired three people so far and the resumes without cover letters are just question marks to me, no information about why the individual wants the job or why they think they would be a good candidate. This is especially important if you are switching career tracks because your resume may not immediately reflect why you would be good in the position. Another version is called a pain letter and is equally effective.
  5. Highlight your soft skills – As scientists, we get used to listing our publications and the assays we've developed, etc. If you are looking to move off the bench, then you will have to translate your research experiences into some "soft" skills such as project management, negotiation, team leadership and even budgeting. In doing so, make sure to use active verbs like "implement", "plan", "execute".
  6. Don’t be afraid of fellowships – No one bats an eye about doing a post-doc fellowship but that doesn’t translate as much to moving away from the bench or switching careers. Fellowships can be a great way to gain skills outside of your existing skill set and broaden your network so you can land the perfect next job.
  7. Apply to everything – You may not think you have the qualifications for a position but apply anyway. Make the case for why your unique skills can get the job done. Maybe the hiring manager already tried the usual candidates and it didn’t work out. You never know. This goes doubly so for women. Women are much less likely to apply for jobs if they don’t meet every one of the criteria. Just apply if you really want the job!
  8. Volunteer for opportunities that are a little out of the ordinary – During my graduate school research, I also volunteered for a start-up science policy organization. That opportunity gave me the ability to add things to my resume that were different from the “normal” grad student resume. Always look for opportunities to strengthen your profile.
  9. In interviews, be clear not just about the value you bring to the organization but also about how you would take the position and make it something they didn’t envision – In so many interviews, I’ve heard candidates regurgitate their resumes. That’s fine, but what I’m really interested in is how you will take this position and make it your own.
  10. Do your research! – I know everyone tells candidates this before an interview, but I was astounded by how many candidates I interviewed recently who clearly hadn’t done their research on our organization. Make sure you know who is interviewing you (LinkedIn is good for this), what the organization’s goals are, its financial situation, and its key partners, and then bring good questions to the interview. You can even find good interview questions online, so there are no excuses.

Wow, my last post was in May. I suppose enough time has passed for this to be classified as the re-re-relaunch of the blog. Here’s to hoping that the third time is the charm. It’s also a re-re-relaunch because I’m introducing a slight change in scope for this blog. Of course, I always start these sorts of things with the best intentions and then, well, life gets in the way. At least, that’s what they say, right?

Recently I was listening to my newest podcast obsession. (For those of you who don’t know, I’m totally obsessed with podcasts and listen to them all the time.) My newest binge listen is Zig Zag (https://zigzagpod.com/), a podcast about women, entrepreneurship, and technology. I’m also making a plea here for fellow listeners, since I need, need, need people to talk to about this podcast. A recent episode inspired me enough to start writing again. The episode was about the hype cycle, a concept developed by Jackie Fenn at Gartner that tracks the “cycle” of a new technology from inception through to adoption. The concept applies to new ideas as well. As I listened to the host apply this cycle to everyday challenges and ideas, I had a lightbulb moment (or epiphany, for all you fancy people out there). Sure, life got complicated and harder than expected recently (more below), but something bigger was afoot with this blog. I was in the TROUGH OF DISILLUSIONMENT (insert dramatic voice and music here). The trough of disillusionment is the phase when interest wanes due to failed experiments or implementations. At this point, stakeholders can drop out, and survival only happens if the product improves to the satisfaction of early adopters. So here I am, coming out (hopefully) of the trough of disillusionment with an improved product and hoping that you (my early adopters) will approve of it.

But before I unveil the new and improved content, a bit of background is in order, so please bear with me. Back at the beginning of this year, a series of unfortunate/fortunate (yet to be determined) events unfolded such that I ended up leading two teams at work. Initially, I stepped in as the interim head of another team in the organization. This team had been newly elevated in a re-organization and needed a senior manager; since those were in short supply at the time, we divided and conquered to cover the new teams that were created. Since the newly elevated team was lab data management, it made sense for me to take it over given my background in the lab. The team had been through a lot: it was wary of the reorganization and had seen quite a bit of shifting leadership in the past few years.

So now I had two teams: one that needed someone to lean in and provide guidance through change, along with a clear direction and vision for the team moving forward, and one that expected me to lead them through the execution of the goals we had set at the beginning of the year. On top of that, there were the additional expectations of our partners, who now interacted with me in two capacities. I don’t say this to elicit sympathy or pity, but simply to note that I found myself getting a crash course in management and in setting priorities to make sure I stayed on top of the most important things.

As I was treading water, trying to stay afloat while learning two new teams, I found a moment to stop and think (yes, just one). Was this actually an opportunity in disguise? Could I even think about another opportunity in an organization I had joined only 5 months earlier? Here I was, having just gone through a big job transition, and I was contemplating another one. How foolish could one person possibly be? I was about to find out.

My boss was the one who initially floated the idea of me switching to lead the lab data management team. I resisted at first, and then I started thinking…a dangerous habit of mine…what if? What if I could draw on all my skills, including my background in the lab? What if I could take a team that needs direction and guidance and build something truly unique and special? What if this is a huge mistake? As the head of a statistical unit, I would be a known quantity. I could go anywhere from there. Companies are always looking for leadership for statistical groups. I would be on a defined career progression for once. If I took on this new team, there would be no such assurances. There are very few other teams like it around. I would have to, again, pave my own way and define not only what this position would look like for me, but also redefine and build a whole team. So I was faced with the decision: stay where I was (leading a great team, with a fairly clear path in front of me) or jump into the unknown. What do you think I did? Of course, I jumped. Part of me really wishes I could be comfortable with the straight and narrow, but I’m always one to be enticed by the road less taken.

So here I am, almost a year after taking on an interim team and 7 months after officially switching over to being the Senior Manager of Lab Data Management. I still have some residual duties on the statistical team that I am hoping to wrap up this Spring so I can fully focus on one area in the organization, and I really think I made the right choice. One unexpected consequence of this move is that I’m now rethinking my career trajectory. Because, you know, I need more change. My new group is responsible for monitoring and maintaining the quality of the specimen data and assay data for our clinical trials. This taps into my deep-rooted love of all things quality-related and has also got me thinking about data quality in research in general, especially with data science being the hot new “it” field. Could I potentially become a Chief Data Officer instead of a CEO? Was data my new passion or a resurrected one? What would I have to do to gain skills to match my new team and my new vision of my career path? That’s what I’m going to explore more in this blog: how to get out of the lab and into another industry, and how to keep thriving, learning, and shifting to find what you really love. I will also be making informed and educated pleas for more quality control in research data, along with tips for how to do that. It will probably be my 5th or so re-invention, but as I approach my 40th year, there’s nothing I want less than to be stagnant.

The Trough of Disillusionment