Back to Basics

One of the interesting side effects of having done a PhD, aside from some unwanted flashbacks whenever I come within view of a flow cytometer and the uncanny ability to locate the mouse facility in any research building by smell alone, is my iterative approach to pretty much everything. Having spent a good portion of my career in research science, I naturally approach everything as an experiment: gather the data, analyze the data, adjust my hypothesis and approach, and run another experiment. It's become so ingrained in my problem-solving methodology that most of the time I don't realize I'm doing it. In the managerial work I'm doing now, this translates into a near-constant state of process improvement. That's basically true: I'm constantly running little experiments, trying to solve a bigger issue or achieve a larger goal. I'm sure this drives some people on my team nuts, since they're constantly hearing "can we try this another way?", and I'm also sure there's little chance I'll fundamentally change this behavior anytime soon. My task is to make sure that the end goal is always in sight and that the team knows why we are iterating, what the results of each iteration were, and so on. But I think that's a topic for another post.

I lead off with an explanation of my general thought process because that is how I arrived at a new initiative with my team, one that is very relevant to what I've been discussing here. I recently obtained the DAMA Body of Knowledge (DMBOK) and am casually studying it with an eye toward certification. The Data Management Association International (DAMA-I) has been around since the 1980s and is made up of data management professionals from around the world; it offers certification as well as annual conferences. Anyway, the excessively large book begins with a discussion of the definitions of basic terms. At first I thought this was a little too basic, until I delved deeper into the reading. It wasn't that the definitions of basic concepts like "data" and "metadata" were so profound that they encompassed all possible scenarios for the use of those words, or anything transcendental like that. While I was reading through the rather dry text, it occurred to me that my team hadn't set these definitions for ourselves. We hadn't laid the foundation for our data management practice by collectively agreeing on what we meant by "data", "metadata", "data management" and "data management principles".

I decided to start an Intro to Data Management information sharing series with the team. It's a once-a-month session in our team meeting that will go through fundamental data management components, starting with the basics. I call it information sharing rather than "training" or "education" because I truly want it to be a discourse on best practices and a re-imagining of what these dry, textbook definitions and theories mean for us in our practice of lab data management. This is probably iteration #3 of our team meeting, and I know we're getting closer to a framework that meets the team's needs.

Sidebar: This brings me to another iteration of practice that I introduced recently and that is gaining some traction. I came across an article about silent meetings. (See resources here: https://hbr.org/2019/06/the-case-for-more-silence-in-meetings and here: https://www.businessinsider.com/better-idea-silent-meetings-work-2019-1, just to name a few.) The concept struck me at once because of the mix of participation and non-participation that happens in most of the meetings I facilitate. OK, let's be honest, mostly non-participation. Having come from an organization where participation in meetings was considered an essential skill and the CEO valued "a good dust-up", this culture of passive listening and head nodding was totally foreign to me. I knew people had good insights and ideas; they just weren't coming out in the meetings. When I saw an article about silent meetings in a daily email digest I get, it was definitely a light bulb moment. You can read the links for details, but the benefits of this type of meeting are that those who tend to be quiet in meetings can offer their ideas and you avoid the bias of everyone echoing the opinion of the first person who speaks. So far the results have been good and the feedback from the team has been overwhelmingly positive.

Now, after that rather large squirrel, let's get back to data. Defining the term "data" may seem like the most obvious thing in the world and not worth the time it takes to write it down. Let me offer this anecdote to illustrate why the time might be worth it. Where I currently work, we get a large number of data requests. We have formal procedures in place now, but not too far in the past, email requests would come in from a researcher asking for "the data". We would email back and say, "OK, here is the raw data you asked for." Nope, that was not what the requester wanted. Attempt #2: "Here is the processed, standardized data set." Nope, that wasn't it either. Attempt #3: "Here is the exact analysis data set used, with the metadata attached." Nope, that wasn't it either. Attempt #4: "Here are the results (interpreted data)." That was it!! Finally!! Now of course there are a number of issues with this anecdote, including communication, intake processes, triaging of requests, etc. What I would like you to take away from it, though, is the importance of having a shared vocabulary and how much rework it saves.

Defining terms such as data, metadata, data management, etc., is extremely important no matter what kind of work your organization does. For a data management and analysis organization, this is obviously critical, but I would argue that it is critical for all organizations. If you do not define what "data" is for your organization, all sorts of assumptions can be made about how data moves through the organization, how it is secured, how it is managed and how it is retained. The DMBOK defines data as an asset, "an economic resource that can be owned or controlled and that holds or produces value." As such, it should be defined and managed. The organization I work for has done an immense amount of work doing just this: defining data, metadata and data management. That work had not yet trickled down into my team. We hadn't taken those definitions and translated them into ones specific to the lab data we manage. Using the silent meeting structure, we defined what data meant to us in the context of the lab data we manage, what metadata was, how data management was defined and which data management principles aligned with how we (and the whole organization) work. The feedback after the meeting was that both the format and the content were well received.

Now is where this post all comes together. Getting back to basics means:

  1. Defining what "data" and "metadata" mean to your organization. What do you include in your definition of data and metadata?
    1. Be comprehensive here. Assuming that data has to be numerical or electronic can be misleading. There is a lot of "data" that we would not have included in the definition not so long ago: names, addresses, the number of times someone visits a website, personal preferences for clothing, etc.
  2. Define what data management means to your organization.
    1. What are the key components of how the organization should manage data?
  3. Outline your data management principles
    1. These should guide the data management practice
    2. Reflect on how your data assets are used or can be used to reach your organization’s goals
    3. You don’t have to do this from scratch. There are lots of good resources out there to use as a starting point.

You can use a number of different types of meetings, asynchronous communication, etc. to get at these definitions (hey, why not throw a silent meeting in there?). Laying out this foundation for data in your organization, and making sure the whole organization (here I mean group, team, whole company, whatever you have influence over) is clear about it, ensures solid ground on which to build any data project.

So, care to run a little experiment with me? Can you define data for your organization? Do you have clear and consistent data management principles? If not, join me in enacting a bit of change, one little iterative experiment at a time, starting with the basics.

Welcome to the (Virtual) Team

I hope everyone has been adapting successfully to their new work environment. I'm going to go ahead and assume you're all working from home or under some iteration of non-standard work practices at the moment. We're almost a month into fully remote work and narrowing down our best practices and preferred technologies. Zoom has featured heavily even though MS Teams is freely available. Zoom has the added benefit of letting you see everyone at once in a meeting of more than 5 people. Apparently Microsoft is working on this feature for MS Teams but has yet to implement it. (Hurry up, Microsoft!) I also came across this stellar article recently: https://careynieuwhof.com/my-top-7-rules-for-leading-a-digital-team/.

However, as normal as this has started to feel, I have a new challenge coming soon, onboarding a new employee remotely. There are several unique challenges that I’m encountering with this new process. The first one being, how do I ensure this employee gets all the appropriate paperwork signed and has access to the needed equipment?

Fortunately, the organization I work for has been ahead of the game in preparing for “shelter-in-place”. Non-essential employees were sent to work from home before official orders came in from the government and support services such as IT ramped up sufficiently to make sure that systems such as the VPN didn’t crash and burn. Similarly, HR has stepped up and is doing onboarding remotely with ID cards, etc being issued later. Which is fine since we’re not supposed to be on campus anyway. OK, one thing down, numerous more to go. We’ve even figured out a way to get the employee a re-imaged and ready-to-go laptop. Whoohoo!

Working in the regulated environment of clinical trials means there is always additional paperwork that needs to be filled out before anyone starts work. As with any new employee, there are signatures required on documents for the Quality Department before training on SOPs (Standard Operating Procedures) can be done, and employees must be trained on the SOPs before they can be trained on the work. One would think that in an organization without a validated e-signature system, this would present a rather significant hurdle. Again, smarter heads prevailed and the Quality team came up with workarounds, both for signatures and for the essential paperwork needed before SOP training can begin. So now we've knocked two items off the list.

Now come the really tricky parts. 1) How do you train someone virtually? I am sure there are teams and organizations where this is second nature, but that is definitely not our experience. The first part of training will work out well. I've developed a reading list for all new employees that details required reading in order of priority. For my team, it's all the SOPs first, then introductory scientific articles on HIV in general and on the research being done by our partners, then protocols and related documentation for the studies the individual will be working on. Next, any new employee can move on to a training matrix that includes online training modules, additional reading (such as team Work Practice Guidelines covering everything from days off to Slack and Jira usage) and introductory meetings with admin staff and other teams. Obviously those meetings will all be virtual now.

When it comes down to training on the actual data management, that's where we're going to have to be a bit more creative. We've developed documentation on how to review specimen data, how to generate reports, etc., but I've found that the best training is shadowing existing staff and getting to ask questions in real time as the tasks are being performed. No SOP or instruction document can adequately describe the intricacies of data management or account for all the possible scenarios of why a data discrepancy was generated. When so much of the work involves problem solving, how do you teach that virtually? The best solution I have at the moment is to try shadowing virtually. I think the technology is up to the challenge, with screen sharing and virtual whiteboarding available, so I'll put a check in that box for now.

Speaking of all these virtual meetings, this brings me to the next challenge: 2) How do you integrate someone into a team virtually? As well as the team has been doing with virtual coffee chats and happy hours and TED talk watching, they all had an existing relationship prior to going virtual. It's yet to be seen how someone can be integrated into the group in a purely online setting. Of course I'll try a few Zoom meeting icebreakers, but I think this one is very much a TBD challenge. No check marks here. One strategy I've been pondering for the team anyway is called silent meetings. Since I have a range of personalities on the team, from those willing to speak up in meetings to those who are less so, I gravitated to this concept when it popped up in an email last week. The basic premise is that you email a question ahead of time and gather responses. The facilitator then passes out the responses (without names) at the beginning of the meeting and everyone takes time to read them. The team then identifies main ideas in the responses, the facilitator writes them on a whiteboard and the team stars the ones they identify with. This is used to focus the subsequent discussion. Here's a link to the article: https://slab.com/blog/silent-meetings/. I'm not sure if this will help exactly with team integration, but if the new member is shy about speaking up at first, it could be beneficial.

The last challenge I've identified so far is 3) how to assess performance and the retention of training? Now this might seem like a problem common to the whole team, but with the other members I've had a bit of time (sometimes quite a bit of time) to get to know and understand how they communicate, how to tell when something isn't going well, what their usual sticking points are in the work, etc. With someone new, this is much harder. I won't be able to tell when they are saying everything is fine when it's not, or when they are feeling dissatisfied with the work, at least not as easily. I'm in the habit of walking around and checking in with the team when we're in the office, and that often gives me clues as to what they are doing as well as how they are doing, and allows for more spontaneous updates and conversation. It's going to be a bigger struggle building that trust and relationship with someone new while we are all working remotely. This is an aspect of the whole situation that is going to require some more research on my part. Definitely no check marks here, as I don't have anything even rattling around in my head yet about how to tackle this one. I'm open to any and all suggestions.

So, to recap, when onboarding an employee virtually:

  1. Work closely with HR, IT, etc. Don't be afraid to ask for help, propose creative solutions and be open to thinking differently about how this all can happen. Everyone wants to make this a smooth experience for a new employee and is often willing to help (example: our IT department is printing out some material for my employee, who doesn't have a printer at home).
  2. I didn’t actually mention this above but it’s a running theme for this whole “virtual teams” situation. Communication is key. Even though I’m not 100% sure how this onboarding is going to look, I’ve been in contact with the employee and letting this person know as much as I know about what the first day will look like. Information is always appreciated, even if it’s incomplete information.
  3. Having standard training material already developed and ready to go is a life saver. It was so nice to have one less thing to worry about and to know that this new employee was going to get the same information that the last 3 new employees got and that it was going to contain everything they needed to get started. Additionally, having a template of key meetings to set-up made that process go a lot more smoothly as well.
  4. Integrating a new employee into an existing team virtually is likely going to be a bit tricky. I'm going to have to pay special attention to make sure this new person feels included. I'll probably try some icebreakers and will definitely try out silent meetings to level the playing field in meetings for everyone on the team.
  5. Assessing performance and training retention is truly an unknown for me at the moment. Any and all suggestions welcome.

Team Building 101

As you probably know if you follow this blog, I’m a big fan of the Harvard Business Review. When I started as a senior manager two years ago, I landed in charge of a team of 30 people. I had never had a direct report in my career and to say I was nervous would have been a severe understatement. My boss agreed to pay for a subscription to Harvard Business Review (HBR) and I have become an ardent follower of them since then. I even have 3 of their podcasts on my phone. I mean, you have to listen to something on the bus commute, right?

When I started trying to figure out what I wanted to do with my career post-fellowship, I was told by a few people that I should consider getting an MBA. After so many years of graduate school, the thought of more school was physically repugnant. I have a terminal degree; why in the world would I go and get another one? Not to mention that I had just finished paying off my student loans and there was no way I was going to acquire more. Now that I'm deep into senior management, I do have much more appreciation for the value of getting an MBA. Since I'm still not in a position to go back to school, and I'm still not convinced I need to, I've been piecing together my own MBA of sorts. So far, this has involved the subscription to HBR, an executive mentoring group, some online courses, learning from others, and most recently an executive coach. I think it will all add up to me becoming a better manager, hopefully.

One of the areas I'm actively working on as a manager is leading change. The team I manage has been through a lot of change in the past two years, including me coming on board. I could spend a lot of space here detailing the extent and scope of that change, but instead I'll summarize and say that moving from an organization that supports research to one that supports product development and research is a massive paradigm shift. This might seem like a fine distinction, but there are broad implications to this change that stretch from redefining best practices and processes to rethinking the team's identity.

This is where my ad hoc MBA training has helped me. It did seem daunting to manage not only the change the team had already been through but all the change yet to come. The article that really clicked for me was from HBR, and it talked about team motivation. You can read the entire article here: https://hbr.org/2012/04/increase-your-teams-motivation. The point of the article is that people are much more committed to an outcome (by a factor of 5:1) when they get to choose it. How I translated this for my team was that they would be much more motivated to change if they could choose what that change looked like. Operationally, I decided on a team retreat to accomplish this.

We held our second annual team retreat a few weeks ago. We're not exactly pros at this yet, but I think just holding the retreats is a victory in and of itself. I say that because it is during these retreats that the whole team has an opportunity to weigh in on all the changes happening in the group, help shape the direction of the team and decide what is important for us all to focus on. We started the day deciding on the mission and vision of our team. While my organization has a mission and a vision, I thought it was important for the team to have one as well, especially a vision, so that everyone can be on board with where the team is striving to go.

Next we moved on to team goals, defining our top 6 strategic goals for the year and their priority. In my opinion, that last bit is the key. If you don't prioritize your goals, no one on the team knows how to prioritize their work. I'm all about ruthless prioritization, ensuring that everyone, including myself, is putting energy mostly into the tasks aligned with the strategy of the team or organization. Prioritization can be a difficult exercise when there is a lot to do or a big change to undertake, but it is possible and it is well worth the effort.

The rest of the retreat involved outlining the tasks involved to complete each goal and then deciding on the first next step in each task. We finished the day spending time with the team we rely on the most, the lab programming team. Cross-functional interactions can be challenging for a number of reasons and having dedicated face-to-face time together to discuss challenges and successes makes a difference. (See here for another HBR article about team retreats: https://hbr.org/2018/09/stop-wasting-money-on-team-building.) My hope is that involving the whole team and our cross-functional partners in the process of shaping the change will result in increased commitment to that change. Only time will tell.

I know that none of this sounds like rocket science, vaccine development science, drug development science or otherwise, but it does work and the research has borne that out. This journey is one where I am learning every day and growing in my confidence as a manager. I do feel like I have a good set of tools at my disposal to aid in my success. More importantly, I have a talented and engaged team that has set a vision and is committed to reaching it. So maybe I don't need an MBA after all.

What to do with a vial of blood?

You may have thought from the title of this post that I was going to post some vampire fan fiction. While this wouldn’t be the first time someone thought I was a vampire (that happened years ago collecting blood at night in Haiti for a lymphatic filariasis survey), that’s not really my thing. Last time I talked a bit about the differences between clinical data collected on the Case Report Form (CRF) and non-CRF laboratory data. For today’s post I’m going to walk you through the life-cycle of a specimen and how my team ensures that every specimen possible can be used for testing and subsequent analysis.

The life of a specimen starts at a local clinic when the study protocol indicates that a sample is needed for particular testing at that specific visit. The vast majority of this is decided ahead of time when the protocol is being finalized. There are specific tests that need to be run at specific time points, either before and/or after treatment or vaccination. For example, at the peak immunogenicity time point post-vaccination, there are specific immunological assays that have to be run to determine whether the vaccine has elicited an immune response. For the sake of brevity, I'll defer the discussion of which immunological assays are run to another post. Try not to be too overcome with anticipation.

The tube, or tubes, of blood collected at the clinic are sent along to a local lab to be processed and to have some safety labs run. You'll remember from a previous post that the type of lab data I will be opining/educating about is the non-safety lab data for clinical trials. Accompanying the vial(s) of blood is often a written form that includes an inventory of the vials in that shipment and some metadata surrounding each vial, including participant ID, visit number, visit date, specimen type, etc. Now, I want you to pay particular attention to this seemingly minute detail. We now have metadata for that specimen entered in the CRF (the lab tech had to check off in the CRF that the specimen was collected, and that check produces metadata: the participant ID, specimen type, visit number, and date and time collected for the specimen, all of which is recorded and retained in the clinical database). We also have that metadata on the physical sheet that goes along with the specimen to the processing/local lab. One of the tenets of data management is that if the same information is entered in multiple places, there will likely be errors.

Right now our specimen (i.e. vial of blood) is at the local lab or processing lab to be processed into plasma or serum or cell pellets. Those blood products are aliquoted out and stored either at the local lab or often, at a repository. Now don’t think that all those little tubes are sitting in freezer boxes all nameless. All that metadata that was entered into the CRF and transferred to the lab form is now entered into a Laboratory Information Management System (LIMS). LIMS systems are used to manage all the information around specimens and assay results. If you’re keeping track of our specimen metadata, we now have metadata for the specimens in the CRF, on a physical form and in the LIMS. And every little aliquot (tube) that was derived from the single specimen has that same metadata associated with it.

Now a testing lab is ready to perform testing on a designated aliquot, as outlined in the protocol. The specimens are shipped to the lab with a shipping manifest that contains an inventory of the specimens in the shipment. The specimens' bar codes are scanned into the receiving lab's LIMS and now the fun can begin. For those of you keeping score, the metadata around the specimen now resides in: 1) the CRF, 2) the lab form, 3) the LIMS installation at the processing lab, 4) the LIMS installation at the repository (if one is being used), 5) the LIMS installation at the central or endpoint lab...and a partridge in a pear tree. As you can imagine, having the specimen metadata replicated in all these different places can lead to errors occurring as a consequence of data transfers and being perpetuated through all the downstream locations. This is where my team comes in. We programmatically compare the specimen metadata in the CRF to the metadata in LIMS. The goal is to identify and correct all errors before the specimens are shipped out to the labs performing the testing. In order to accomplish this daring feat of data management, we have a crack team of programmers supporting us, creating and maintaining the code that does the comparison and spits out reports of the errors.
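
To make that comparison concrete, here is a minimal sketch of what a CRF-vs-LIMS metadata check might look like. The record fields and function names are illustrative assumptions, not our actual system, which is far more involved, but the core idea is the same: key both sources on the specimen ID, flag field-by-field mismatches, and flag specimens that appear in only one system.

```python
# Hypothetical sketch of a CRF-vs-LIMS specimen metadata comparison.
# Field names and record layout are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class SpecimenRecord:
    participant_id: str
    specimen_id: str
    visit_number: int
    visit_date: str       # ISO 8601, e.g. "2020-03-15"
    specimen_type: str    # e.g. "plasma", "serum"

def compare_metadata(crf, lims):
    """Compare CRF and LIMS records keyed on specimen ID.

    Returns a discrepancy report: one tuple per problem, listing the
    specimen, the field that disagrees (or a missing-record flag),
    and the CRF and LIMS values.
    """
    crf_by_id = {r.specimen_id: r for r in crf}
    lims_by_id = {r.specimen_id: r for r in lims}
    report = []
    for sid, c in crf_by_id.items():
        l = lims_by_id.get(sid)
        if l is None:
            # Collected per the CRF but never entered into LIMS
            report.append((sid, "missing_in_lims", None, None))
            continue
        for field in ("participant_id", "visit_number",
                      "visit_date", "specimen_type"):
            cv, lv = getattr(c, field), getattr(l, field)
            if cv != lv:
                report.append((sid, field, cv, lv))
    # In LIMS but with no corresponding CRF entry
    for sid in lims_by_id.keys() - crf_by_id.keys():
        report.append((sid, "missing_in_crf", None, None))
    return report
```

The report itself resolves nothing, of course; as described below, the lab data managers still have to chase down which source is definitive and get every copy corrected.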

Of course, nothing is ever as simple as “generate a report and be done”. The lab data managers on my team work very closely with clinical sites and labs to determine the source of the error and what the definitive source of any given metadata is and to ensure that changes are made in all places where the metadata may be incorrect.

So why all this effort to ensure that a visit date for a specimen is correct? Does that really make a difference in the grand scheme of a whole trial? Channeling our inner consultants, let's unpack that assumption. Due to the complexities of participants who are on PrEP, or the fact that HIV vaccines elicit anti-HIV antibodies, HIV diagnosis for clinical trials follows a testing algorithm where specific tests are dictated by the results of previous tests (confirmatory testing) or by visit type in the study (i.e., before or after vaccination). This is actually done for HIV testing outside of clinical trials as well: there is a required confirmatory test if you test positive by a rapid test, the same way a woman would go to the doctor for a confirmatory pregnancy test (see https://www.cdc.gov/hiv/testing/laboratorytests.html). But I digress. As I mentioned, the HIV diagnostic testing algorithms can differ by visit. If the wrong algorithm is run on a specimen because the visit number was incorrect in the metadata, it could lead to the wrong result for the participant. That's obviously not something anyone wants to happen.

While that example is on the extreme end of the spectrum of what ifs, metadata errors for other values can lead to the incorrect testing being performed for other tests, which would lead to incorrect data ending up in the dataset for analysis. If the lab data are being used to evaluate study endpoints, the quality of the lab data is paramount. One of the main goals of my group is to make sure that the lab data used for analysis is as clean as possible and that each data point is a valid data point.

From an ethical standpoint, ensuring that each specimen collected from a participant can be used is critical. Clinical trial participants are a special breed of people who are willing to be part of these studies, sometimes not for immediate benefit to themselves but for the advancement of the science toward a cure. The whole study team is dedicated to guaranteeing that a participant's involvement in a trial isn't for naught. Our small contribution to that guarantee is to make sure that any specimen they give as part of the trial is tested, that the data are used for analysis, and that participants aren't brought back to give additional specimens unnecessarily because no one can find their initial specimen.

I hope that I have convinced you that specimen management is a vital part of the clinical trial process. Please add a comment if you have any questions about the process or why we’ve invested so much time and energy into it.

Up next time…I get back to my “how to run a team” posts with an update of a team retreat we just had.

Lab Data: The Special Snowflake of Clinical Data

We briefly discussed clinical trial data in the last post and the methods used to collect, clean and analyze the data, or at least where you can go to find that information. Now we finally get to lab data. Lab data may seem straightforward: you get results from labs, you add them to the other data from the trial and you're all set. That is not the case, however, for many trials. Lab data has some nuances that make it a bit of a special snowflake.

The decision of where lab data ends up as part of the clinical trial data has to do with 1) what type of lab data it is, 2) how the laboratory and testing structure is set up for the trial, and 3) what type of endpoints the lab data will be supporting.

Let's start with types of lab data (cause, you know, starting with 1 is usually a good idea). In my organization, lab data is divided up between what we call safety lab data and non-safety lab data, fancy, huh? Safety lab data are the result of testing done on samples to "ensure that patients are not experiencing any untoward toxicities". (Chuang-Stein C, 1998) These tests are usually ones you would see at a doctor's office: liver enzymes, white blood cell counts, etc. In the clinical trials my organization supports, this lab data is entered into the CRF by testing labs connected with each clinical site or group of clinical sites. Entering safety lab data into the CRFs is industry standard, as it keeps all the safety data available to be examined regularly to ensure the safety of the participants. The workflow for safety lab data is: a sample is collected from a participant at a visit to the clinical site, the sample is processed and sent to a local lab for testing, and the results are sent back to the site, which enters them into the corresponding CRF for that participant and visit. The safety lab data is managed by the Clinical Data Managers (CDMs) for a study, and the quality checks and processing procedures are the same as for the other data collected on the CRFs.

Non-safety lab data consists of lab data that is not generated in support of safety considerations. This spans a whole range of data, including immunogenicity data for vaccine trials and pharmacokinetics (PK) data for drug trials. The tests for non-safety lab data can be performed at either local labs or central labs, but the key is that the results are not sent back to the site to be entered into the CRF. This is because there are usually no reporting requirements for non-safety lab data. (If a participant has a low white blood cell count, for example, the site would be required to counsel them and perhaps refer them for additional testing.) Since the non-safety lab data is not reported onto the CRF, it has to be uploaded to the data management center in some way, cleaned, with quality checks performed and errors resolved, and then the data are merged with the other clinical data for analysis. The distinction between CRF and non-CRF data is a big one. The CRF data is collected and managed in a Software as a Service package (in our case Medidata Rave) that allows for creation of the CRFs, data entry, data cleaning and database creation, all in a single, validated and maintained system. Data that comes into a data management center outside of the electronic data capture (EDC) or other CRF system has neither this built-in functionality nor the infrastructure around it to make data creation, cleaning and storage relatively easy. Lab data is not the only type of non-CRF data, so these issues span other areas such as questionnaires, SMS or text data, and participant diaries. Since I have absolutely no expertise in those areas, I’ll stick to the lab data. Developing the systems to import, process, store and distribute non-CRF data is a big undertaking, and I will discuss some of the ways we do this in upcoming posts.

Lab data will be used within the context of a clinical trial, so many organizations opt to embed the lab data management within clinical data management. My organization has opted not to do this, though our processes are aligned with the clinical data management team. Part of the reasons why we have split out the lab data management from the clinical data management has to do with the two other features of lab data management that determine where the lab data ends up in the overall data of a clinical trial.

Workflow of the lab samples and testing may not, on the surface, seem like it would influence what happens with the data downstream, but it can have a big impact. As I mentioned above, there are a few different set-ups for laboratory testing that I’ve seen with clinical trials, and probably endless combinations from there. One scenario is to have the samples drawn at the clinic and sent for processing to a local lab. That same local lab would then perform the safety lab tests and diagnostic testing and store additional aliquots of each sample. Those additional aliquots would then be sent to central or specialty labs for more advanced testing (e.g. immunogenicity or PK testing). In this scenario, the diagnostic test results would be sent back to the clinic along with the safety lab results and reported on the CRF. In order to ensure quality and consistency across multiple labs, a selection of samples could be sent to a central lab to verify diagnostic status.

In another scenario, the samples are collected at the clinic, then sent to a local lab for processing and safety lab testing and then aliquots sent to a central repository. The aliquots of the samples would then be sent out to central or specialty labs for immunogenicity, PK or other specialized testing. Additionally, diagnostic testing can also be done by central labs as opposed to local labs.

So what are the implications of these different workflows? In the first workflow, all the safety results and diagnostic results would be reported on the CRF. Any specialized testing would have to be reported through a mechanism other than the CRF, but done in a way that makes the results data compatible with the other data from the trial. This is where my team comes in. We receive specialized testing data, process it, resolve errors and create datasets for analysis. The same is true for the second workflow, with specialized lab results having to be sent to the data management center via a secure and consistent pipeline apart from the clinical data stream, and my team receiving and processing the data. A centralized diagnostic lab would have to report the data back to the sites to enter into the CRF in order to be able to give those results to participants. However, in the case of the diagnostic data that we handle from a centralized diagnostic lab, that data comes through my team first, where we perform quality checks and ensure that the correct testing has been done on the correct samples. So where the lab data is coming from influences how it becomes part of the overall data for a trial and who handles it along the way.

Up until now, the reasons why lab data can be unique have had to do with the type of lab data being processed and the route by which the data came to the data management center. Taking these two characteristics together, you could still make the case that the lab data could all be reported on the CRF and handled by CDMs, which, as I stated earlier, is how many organizations operate. The final consideration in this argument is what analyses the lab data is supporting (i.e. what type of endpoints will use lab data in the analysis). An endpoint for a clinical trial is defined as “a measurement determined by a trial objective that is evaluated in each study subject”. (Self, SG, 2004) Essentially, it’s what you are measuring your intervention against. Most endpoints for clinical trials are safety and efficacy focused and are called “clinical endpoints”; essentially, is the intervention safe and does it work in stopping or preventing disease? The key word there is “disease”. Aside from the safety measures we discussed above, the goal of a clinical trial is to ensure that an intervention works, and in the world that I am in, “works” equals prevents HIV. So the endpoint of a clinical trial would be: does this intervention prevent HIV? That is over-simplifying to a rather large extent. There are different phases of clinical trials that have different purposes, the first of which (Phase I) is just to ensure that a product is safe in humans, and, if it’s a vaccine, that it elicits an immune response. But for now, “does it prevent HIV” is good. From the lab data perspective, traditional clinical endpoints are relatively easy. Safety data and diagnostic data are reported on the CRF, so there is little to do that is different from any other data in the trial.

But what do you do if you’re researching a disease like HIV, or cancer, where the clinical endpoint can take some time to appear? Trials are long enough as it is, and waiting a longer time until onset of disease can mean more time until a product is available. What if you are trying to improve on an already existing intervention? The existence of an already-licensed vaccine, for example, may mean that the incidence of that disease in the general population has been reduced such that a huge trial would be needed to get enough infected individuals for a robust statistical analysis. These considerations, and others, have led researchers to adopt what are called “surrogate endpoints”. A surrogate endpoint is a “biomarker that can substitute for a clinically meaningful endpoint for the purpose of comparing specific interventions”. (Self, SG, 2004) In the vaccine field, these can be correlates of protective immunity, or “biomarkers associated with the level of protection from infection or disease due to vaccination.” (Self, SG, 2004) The laboratory data that would support a surrogate endpoint or correlate of protective immunity would be the immunogenicity data that I referred to above, which potentially is not part of the CRF. Why does this matter? Data used to support primary or secondary endpoints in clinical trials is the data that is under the most scrutiny from a regulatory perspective. The primary objectives of the study are the ones that regulators are interested in, and then there are always additional analyses done by researchers for more scientific reasons.

Ideally, you would want all the laboratories involved in a clinical trial to report results in such a way that the data is entered into the CRF. However, the logistics of this can be challenging, especially when surrogate endpoints are not already defined and there is a large amount of research going into new methodologies and laboratory tests to define those endpoints, which means lots of labs reporting data. This is where I would argue that splitting out lab data management into its own team is important. While it would seem that having the CDMs handle all the lab data would be advantageous, since they are familiar with data handling and having one team handle all the data is good from a consistency standpoint, I think there are more advantages to the split-team set-up, and not just because I manage such a team. Having lab data separated out into its own team allows the individuals on the team to become highly specialized in handling a type of data that will not have the standardization or harmonization of the clinical data. For clinical data, there is the CDISC system, which provides a framework to harmonize data structures from data collection through dataset creation and into analysis. The same system does not yet exist for specialized laboratory data. There are lab data components within certain portions of the CDISC system, but it lacks the same infrastructure to assure standardization from data collection to analysis. Therefore, lab data arrives at the data management center in every sort of shape and format, and we are responsible for putting it into a format that will fit the statisticians’ needs for analysis and fit into the CDISC structure used by the other clinical data. This is not a cookie-cutter type of activity, and having individuals who are trained on laboratory assays, in addition to data management, produces higher-quality output, at least in my opinion.
Also, having a team that is trained in the laboratory assays being used means that communication with the laboratories is smoother. My team can speak the same “language” as the labs and can help with data issues since they understand how the data was generated. Data management involves a lot of communication and cooperation to resolve issues with the data and having a specialized team helps. It also allows me to elevate the visibility of lab data within the organization. With surrogate endpoints becoming more and more frequent in the clinical trials arena, having lab data occupy the same strategic importance within an organization is advantageous from an operations and business perspective.

Whether lab data management is done by a separate team or the same team as clinical data management, there are considerations that make lab data a bit of a special snowflake. The lack of one system to manage the data all the way through the trial (at least in some cases), the variability of the data and the lack of standards for non-safety lab data make this a dynamic and challenging field to work in. In upcoming posts, I will go into how my team manages these challenges.

The Art of Data Perfectionism

The title of this post includes the word perfectionism. The reasons why are elucidated below. Between when I started drafting this post and now, I had some thoughts that I wanted to add as a preamble of sorts. I keep coming back to why I don’t post on the blog regularly. I could of course blame the fact that I work most nights after dinner, have a family, a social life and am currently taking an online course in Jira. But, as I’ve said before, we make time for the things in life that we’re passionate about and want to do. I am really passionate about this blog, so what’s the hold-up? This might be the one area where perfectionism is holding me back.

I’m not a perfectionist by trait. I’ve never used that as the answer to the “Tell us a weakness” question in interviews. I firmly believe in ruthless prioritization and the 80/20 rule. Also, having been a research scientist, I tend toward iterative creation, design, etc. Getting trained as a Scrum Master was almost like second nature, because of course you would design and produce iteratively, only putting into each development cycle what was really needed. So it’s a hard feeling to reconcile now, this perfectionism with the blog. It’s not like I have tons, or even tens, of followers, so the fear of messing up should be low. Except that it’s not. This goes back to a topic I wrote about in another blog post (Identity Crisis). Having gone through the PhD process in the US and spent the majority of my career thus far in research science, I have this ingrained and ridiculous notion that only people who have studied something for their whole lives (or non-stop for 4 years) have the authority to speak about it. The culture of “elder respect” in research science is strong. I just haven’t gotten my head around the idea that not only am I qualified to talk about a range of topics due to my experience to date, but I am qualified to talk about clinical trials and data since I live that work day in and day out. I’m currently reading a book called “Playing Big” by Tara Mohr, which is a study of why women have a harder time “playing big”, so to speak, and what to do about it. I’ll let you know how it goes, but hopefully one consequence of the process will be me getting my voice out there more.

Of course, the stakes for me with this blog are pretty low. The only real risk is a reputational one if I get something wrong. In the world of clinical trials, the risks of inaccurate data can be much higher. (See what I did there? Slick, huh?) The individuals who are on the front lines of keeping data quality high in clinical trials are clinical data managers and clinical data coordinators. These individuals are often certified and are, out of necessity, perfectionists. Every little detail matters when you’re setting up and managing the data from a clinical trial, from the initial data entry forms to the dataset creation at the end of the trial and locking the database.

Clinical data management is the “collection, integration and validation of clinical trial data”. Done right, clinical data management can reduce the time to market for important health interventions by ensuring the generation and retention of high-quality, reliable and statistically sound data. (Krishnankutty, 2012). High-quality means that the data conforms to protocol specifications and that it contains little to no errors or missing data.

The process starts with the development of the protocol. For the uninitiated, the protocol is a document (often very lengthy) that describes how the trial will be conducted, and ensures the safety of the patients and the integrity of the data. Depending on the organization, clinical data managers are often involved at this early stage. From there, the clinical data managers are integral to setting up the study and how the data will be collected, including what checks will be done during the course of the trial to make sure that the quality and integrity of the data remains intact.

While the trial is ongoing, clinical data managers use a variety of tools to track the data, try and solve discrepancies in the data or find missing data and help to ensure patient safety. If this sounds like individuals have too much control over the data, rest assured that there are pages of regulations that govern operations of clinical trials and the data associated with them and clinical data managers are often at the front line of meeting those regulations.

So with all this to juggle and the results of a trial hanging in the balance, how do clinical data managers do their job? Having worked with them for over a year, I can tell you that they are very committed and very detail-oriented people. They also have fairly clear guidelines in the regulations for how the data should look and how to ensure data quality (i.e. audit trails, etc.). Additionally, there are several professional societies that offer certification, ongoing education and a community of practice. One such organization, and a good place to find information, is the Society of Clinical Data Management (SCDM, www.scdm.org).

So why this whole post about, first, my insecurities and, second, the briefest of overviews of clinical data management? With this post, I’m straddling the dual purposes of this blog: 1) to share my experiences as they happen and as I grow in my career; 2) to highlight the lab data management portion of clinical trials. This first post is to introduce the concept of data management as it pertains to clinical trials in the traditional sense. As I post more (which I will, I promise), I will contrast this with how lab data is viewed and managed in the context of clinical trials and, hopefully, how those practices can assist in non-clinical research as well.

What is Lab Data Management Anyway?

I thought that for this post, I would introduce the new subject on the blog, lab data management. The idea is that in addition to providing witty reflection on how I got to where I am in my career, I would talk a little more about what that career looks like.

Before I can get to my career and what I actually do (still trying to figure that one out), I should provide some background. Lab data management is a subset of clinical data management, so I’ll start there. I am going to use the Wikipedia definition, since I got rid of my encyclopedia set decades ago. Clinical data management is a set of processes and procedures that “ensure collection, integration and availability of data at appropriate quality and cost”. The goal of clinical data management is to generate high-quality, reliable and statistically sound data to ensure that conclusions drawn from research are well-supported by the data. So, no pressure…right?

In many clinical trial settings, both in-house and contracted out (CROs), lab data management is conducted by clinical data managers along with the management of all the other clinical data. There are only a few institutions that I’m aware of that separate the laboratory data. I should clarify that when I’m talking about lab data, I’m not talking about the safety labs done to monitor the participants during the course of the trial (white blood cell counts, liver enzyme tests, etc). Those are monitored along with the other clinical data, at least in our organization. Lab data for my team consists of the endpoint data (HIV diagnostic data), pharmacokinetic (PK) data for drug trials and a whole host of immunology assays that are being done to assess the immune response to vaccines.

So what do we do with the lab data?  I’m so glad you asked.  Lab data management for us can be grouped into two broad categories: specimen monitoring/specimen data quality control and assay data processing.  Specimen monitoring and specimen data quality control are essentially the same thing; for the purposes of this post, I’ll call it specimen monitoring.  In all clinical trials, participants have specimens taken.  It’s usually blood draws, but it can also include tissue biopsies, etc.  The metadata around these specimens can end up being entered in two different data streams: the clinical data stream (i.e. the Case Report Form filled out when a participant comes in for a visit) and a Lab Information Management System (LIMS), which is filled out when the specimen is processed in the lab.  In order for the specimen to be used for HIV diagnostic testing or immunological testing, the metadata has to match in both places.

Let’s take the example of HIV diagnostic testing.  There are algorithms for HIV testing to determine not only if someone is infected, but whether it is an acute or chronic infection.  HIV testing algorithms are not the same for every study.  If you are performing an HIV vaccine trial, where the whole point is to elicit antibodies against HIV, you will have to have a series of tests to determine if the antibody responses that show up positive on a diagnostic test are vaccine-elicited or from actual HIV infection.  If you are testing an HIV prevention intervention, the testing algorithm will be different.  So if the metadata for a specimen at the time of draw says that this blood tube is from visit 4 of protocol 001, then the diagnostic lab knows what testing algorithm to run.  If, somewhere in the process of sending the tube to the lab and the transfer of information from the clinical database, to a specimen label or lab requisition form, to the LIMS, the metadata got changed to visit 4 of protocol 002, then the testing algorithm will be different.  This would render any data from that testing invalid.
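To make the stakes concrete, here is a toy sketch of why a single wrong metadata field matters: the protocol code on a specimen drives which testing algorithm the lab runs. The protocol codes and algorithm descriptions below are entirely invented for illustration; they are not our actual protocols or algorithms.

```python
# Hypothetical mapping from protocol code to the testing algorithm a
# diagnostic lab would run. Codes and descriptions are made up.
ALGORITHMS = {
    "001": "vaccine-trial algorithm: screen plus confirmatory tests to "
           "distinguish vaccine-elicited antibodies from true infection",
    "002": "prevention-trial algorithm: standard screen plus "
           "acute/chronic infection staging",
}

def pick_algorithm(protocol):
    """Return the testing algorithm for a specimen's protocol code."""
    try:
        return ALGORITHMS[protocol]
    except KeyError:
        raise ValueError(f"unknown protocol {protocol!r}")

# Correct metadata -> the right algorithm is run.
right = pick_algorithm("001")
# A transcription error ("001" -> "002") silently switches algorithms,
# invalidating any results for the study the tube actually came from.
wrong = pick_algorithm("002")
assert right != wrong
```

The point of the sketch is simply that nothing in the tube itself tells the lab which algorithm to run; the metadata alone carries that decision, which is why it has to match across streams.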

One whole scope of work for my team is to ensure that the metadata for a specimen remains correct throughout the course of the study, no matter what data stream that specimen appears in.  We accomplish this by programmatically comparing the different data streams each day and issuing QCs when the data doesn’t match.  We then work with the labs and clinics to find the reason for the data discrepancy, consult the source documentation to determine the real value, and resolve the QC.  This ensures that as many specimens as possible can then be used for testing.  Participants trust that when they donate blood or tissue it will be put to good use, and we help to ensure that it will be.
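The daily comparison could be sketched roughly like this. This is a minimal illustration, not our production pipeline; the specimen IDs, field names and records are all hypothetical.

```python
# Compare specimen metadata between the clinical (CRF) stream and the LIMS
# stream, matched by specimen ID; every field mismatch becomes a QC record.
CHECKED_FIELDS = ("protocol", "visit")

def find_qcs(crf_records, lims_records):
    """Return one QC dict per specimen/field where the two streams disagree."""
    lims_by_id = {rec["specimen_id"]: rec for rec in lims_records}
    qcs = []
    for crf in crf_records:
        lims = lims_by_id.get(crf["specimen_id"])
        if lims is None:
            qcs.append({"specimen_id": crf["specimen_id"],
                        "field": None, "issue": "missing from LIMS"})
            continue
        for field in CHECKED_FIELDS:
            if crf[field] != lims[field]:
                qcs.append({"specimen_id": crf["specimen_id"],
                            "field": field,
                            "crf_value": crf[field],
                            "lims_value": lims[field]})
    return qcs

crf_stream = [
    {"specimen_id": "S001", "protocol": "001", "visit": 4},
    {"specimen_id": "S002", "protocol": "001", "visit": 4},
]
lims_stream = [
    {"specimen_id": "S001", "protocol": "001", "visit": 4},
    {"specimen_id": "S002", "protocol": "002", "visit": 4},  # drifted protocol
]

for qc in find_qcs(crf_stream, lims_stream):
    print(qc)
```

Each QC produced this way is then a concrete, traceable question to take back to the lab or clinic, together with the source documentation.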

The second large scope of work for the team is assay data processing.  After clinical specimens have been processed and sent to labs for testing, we receive that assay data back into our group.  We again check to make sure the specimen metadata is clean, and we also do additional quality checks to evaluate the data for format consistency, logic (if there is supposed to be a numeric value, we check to make sure the values are numeric), and some range checks and other assay-specific checks.  This part of our work is important because not only do we want all specimens to be able to be used for testing, we want all the lab testing data to be usable in the statistical analyses.  We provide consistently formatted and clean datasets to the statisticians for their analysis.
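As a rough illustration, the logic, range and format checks described above might look something like this. The field names, the visit-code format and the 0–10 range are invented for the example; real limits are assay-specific.

```python
import re

def check_assay_row(row, lo=0.0, hi=10.0):
    """Return a list of human-readable issues found in one assay result row."""
    issues = []
    # Logic check: the result should parse as a number
    try:
        value = float(row["result"])
    except (TypeError, ValueError):
        issues.append(f"result {row['result']!r} is not numeric")
        return issues  # further numeric checks are meaningless
    # Range check: plausible bounds for this (hypothetical) assay
    if not (lo <= value <= hi):
        issues.append(f"result {value} outside expected range {lo}-{hi}")
    # Format check: visit codes should look like 'V4', 'V12', ...
    if not re.fullmatch(r"V\d+", str(row["visit"])):
        issues.append(f"visit {row['visit']!r} not in V<number> format")
    return issues

rows = [
    {"result": "2.5",  "visit": "V4"},  # clean
    {"result": "high", "visit": "V4"},  # fails the numeric check
    {"result": "42",   "visit": "4"},   # out of range and bad visit format
]
for row in rows:
    print(row, check_assay_row(row))
```

In practice each assay gets its own set of checks layered on top of generic ones like these, which is part of why specialized training on the assays helps.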

In short, lab data management at SCHARP is a group dedicated to preserving high-quality laboratory data for analysis in clinical trials by safeguarding the metadata around clinical specimens and providing consistent and clean laboratory datasets for analysis.  If you’re interested, I can go into more detail about how we do this in subsequent posts. I will definitely be doing more posts about why it’s important to think about data management, even in a research setting, and discussing some methods and best practices for how to start implementing lab data management, regardless of the setting.

The Trough of Disillusionment

Wow, my last post was in May.  I suppose enough time has passed for this to be classified as the re-re-relaunch of the blog.  Here’s to hoping that the third time is the charm.  This is also a re-re-relaunch because I will introduce a slight change in scope for this blog.  I can’t believe so much time has passed since my last post.  Of course I always start these sorts of things with the best intentions and then, well, life gets in the way.  At least, that’s what they say, right? I was listening to my newest obsession in podcasts. (For those of you who don’t know, I’m totally obsessed with podcasts and listen to them all the time.) Anyway, my newest binge listen is Zig Zag (https://zigzagpod.com/), a podcast about women, entrepreneurship, and technology. I’m also making a plea here for fellow listeners, since I need, need, need people to talk to about this podcast, I love it so much. A recent episode got me inspired enough to start writing again. The episode was about the hype cycle. The hype cycle was developed by Jackie Fenn at Gartner, and it tracks the “cycle” of a new technology from inception through to adoption. The concept applies to new ideas as well. As I listened to the host apply this cycle to everyday challenges and ideas, I had a lightbulb moment (or epiphany, for all you fancy people out there). Sure, life got complicated and harder than expected recently (more below), but something bigger was afoot with this blog. I was in the TROUGH OF DISILLUSIONMENT (insert dramatic voice and music here). The trough of disillusionment is when interest wanes due to failed experiments or implementations. At this point, stakeholders can drop out, and survival only happens if the product improves to the satisfaction of early adopters. So, here I am, coming out (hopefully) of the trough of disillusionment with an improved product and hoping that you (my early adopters) will approve of it.

But before I unveil the new and improved version (at least content-wise), a bit of background is in order, so please bear with me. Back at the beginning of this year, a series of unfortunate/fortunate (yet to be determined) events unfolded such that I ended up leading two teams at work.  Initially I stepped in as the interim head of another team in the organization.  This team was newly elevated in a re-organization and now needed a senior manager, and since senior managers were in short supply at the time, we divided and conquered to cover the new teams that were created.  Since the newly elevated team was lab data management, it made sense for me to take it over, given my background in the lab.  The team had been through a lot; it was wary of the reorganization and had seen quite a bit of shifting leadership in the past few years.

So now I had two teams: one that needed someone to lean in and provide guidance through change, with a clear direction and vision moving forward, and one that was expecting me to lead them through the execution of the goals we had set at the beginning of the year.  There were, of course, the additional expectations of our partners, who now interacted with me in two capacities.  I don’t say this to elicit sympathy or pity, but just to say that I found myself getting a crash course in management and in setting priorities to make sure I stayed on top of the most important things.

As I was treading water, trying to keep afloat learning two new teams, I found a moment to stop and think (yes, just one).  Was this actually an opportunity in disguise?  Could I even think about another opportunity in an organization that I joined only 5 months earlier? Here I was having just gone through a big job transition and I was contemplating another one.  How foolish could one person possibly be?  I was about to find out.

My boss was the one who initially floated the idea of my switching to lead the lab data management team.  I resisted at first, and then I started thinking…a dangerous habit of mine…what if?  What if I could draw on all my skills, including my background in the lab?  What if I could take a team that needs direction and guidance and build something truly unique and special?  What if this is a huge mistake?  As the head of a statistical unit, I would be a known quantity.  I could go anywhere from there.  Companies are always looking for leadership for statistical groups.  I would be on a defined career progression for once.  If I took on this new team, there would be no such assurances.  There are very few other teams like it around.  I would have to, again, pave my own way and define not only what this position would look like for me, but also redefine and build a whole team.  So I was faced with the decision to stay where I was (leading a great team, with a fairly clear path in front of me) or to jump into the unknown. What do you think I did? Of course, I jumped. Part of me really wishes I could be comfortable with the straight and narrow, but I’m always one to be enticed by the road less taken.

So here I am, almost a year after taking on an interim team and 7 months after officially switching over to being the Senior Manager of Lab Data Management. I still have some residual duties on the statistical team that I am hoping to wrap up this spring so I can fully focus on one area of the organization, and I really think I made the right choice. One unexpected consequence of this move has been that I’m now rethinking my career trajectory. Because, you know, I need more change. My new group is responsible for monitoring and maintaining the quality of the specimen data and assay data for clinical trials. This taps into my deep-rooted love of all things quality-related and has also got me thinking about data quality in research in general, especially with data science being the hot new “it” field. Could I potentially become a Chief Data Officer instead of a CEO? Was data my new passion, or a resurrected one? What would I have to do to gain some skills to match my new team and my new vision of my career path? That’s what I’m going to explore on the blog now: how to get out of the lab and into another industry, and how to keep thriving, learning and shifting to find what you really love. I will also be making informed and educated pleas and pushes for more quality control in research data, along with tips for how to do that. It will probably be my 5th or so re-invention, but as I approach my 40th year, there’s nothing I want less than to be stagnant.

Identity Crisis – how to get over not being an expert

I noticed something interesting looking at the analytics for this blog.  (Sue me, I’m a data junkie).  It appears that my posts that ramble on about how I got to this position, or my existential struggles along the way get more hits.  I was initially a little surprised by this since I figured you, my lovely audience, were drawn here for useful tips but I see it’s just to witness my inner turmoil and angst.  Well I guess I should give the people what they want, after all I am an artiste, I mean a scientist, I mean a manager, shoot…what am I?

This is a question I ask myself fairly regularly, and one that several people have asked me in one way or another recently (usually under the guise of asking how I made the switch from a technical role into a managerial role and how I feel about it).  Am I still a scientist?  And if not, does that matter and how do I define myself now?

For so long, through my first job out of undergrad, through all my subsequent schooling and jobs, I’ve always considered myself a scientist.  Getting my first publication was a verification of that identity and it felt SO good.   I had worked so hard to get to that point, the point where I was a published scientist, one with a paper where I was the first author (trust me, this is a big deal).  It felt so validating.  It felt like the fulfillment of the dreams I’d had as a little girl playing with test tubes in the kitchen trying to make invisible ink.

There is also something that resonates with people when you tell them you’re a scientist: a respect that you can see reflected back in their face, and the instant recognition of what you do, or at least what they think you do.  There are many ill-conceived notions of what it means to be a scientist, but it’s still way easier to explain than being a program manager or something of that kind.

Looking back, it was probably too big a part of my identity.  Science became all-encompassing in an eat, sleep, dream-about-it kind of way that threatened to drown out other interests I’ve picked up along the way in life.  However, I’m assured by grad school colleagues that this is quite normal.

Given that I never planned on continuing as a lab researcher once I was done with my PhD, I suppose I shouldn’t be surprised that this question has come up. I don’t think I was surprised that the question arose, but more surprised about how I felt about the answer. I suppose I always thought I would be allowed to keep the scientist label, or at least allowed to think of myself that way, as I progressed in my career.  I’m noticing now that it is not really going to be that way.  Having stepped away from the lab without doing a post-doc (or 3), becoming a professor and publishing even more papers, and having switched fields to boot, means that even though I have those 3 letters after my name, I will not be viewed as a scientist by the scientific community.  The question now becomes not so much how I feel about stepping away from the technical work, but how I feel about other people’s perception of my scientific identity.  Do I need to be a technical expert to succeed?  I’ve given it quite a bit of thought, and here’s what I’ve come up with.

Despite the recent shift in thinking that technical expertise is essential for good management (cite HBR article), I now know I can be a good manager without that.  I’m actually a better manager than I ever would have been a lab scientist (see article about why I decided to leave the lab). I also know that individuals with deep technical experience can be great managers. So in thinking of success in terms of my ability to be a good manager, being a scientist doesn’t have any bearing.

The second aspect is my relationships with others in the workplace, especially with external partners.  I have to say that there is some impact here, though this one is harder to tease out.  Since I switched areas of research, some of my inexperience is simply due to new subject matter. I have come across some resistance to my presence in my position because I’m not a statistician or an HIV scientist, but that has not been the norm, and since my last position was also in an area I didn’t know well, I’m used to doing the extra research and asking the right questions to get the information I need.  So I would call this one a draw.

Finally, there’s my own perception of my identity.  Why is it so important to me to say that I’m a scientist?  Is it because it was a dream of mine for so long and I can’t let that go?  Is it because it’s easier to explain to immigration officials in other countries?  Trust me, trying to explain what a program officer or a senior manager does just results in more time going through the line, especially when you just got off a long flight (“no, I just give away money for other people to do the science”; “no, it’s a team of statisticians, you know, number crunchers”).  Having done some self-examination for a while, and in my more honest moments, I know this one comes down to ego.  I like the prestige of being a scientist.  I like the response I get from people when I tell them that’s what I do. And it’s time to let that go.  I didn’t get into science to feel good when I told random strangers my profession.  Years of living in Washington DC, where your value is tied to who you work for, have to be de-programmed from my brain, but I think I’m making progress on that one.

Now, when I’m asked, my response is that I made a conscious decision to step away from the technical lab work.  I did it because it best reflected both where I wanted my career to go and what my strengths are.  Yes, I still miss it sometimes, but I still get to be immersed in science for my job, and I still get to interact with brilliant people and talk about the details of antibody binding (insert happy dance).  Also, I’ve done my research; I have contributed to the scientific annals.  No one can take that away from me.  I will always be proud of that.  However, I also get to have a new kind of pride, the pride of managing a team of driven, dedicated and smart individuals all working toward a common goal, and that shared pride in a shared goal is even better.

My conclusion is that I may not be technically a scientist anymore and that is OK. So much has changed for me in the past few years and I’ve taken on a number of new identities that have nothing to do with my career (wife and mother, for example) so I think I can let one go.

I’d be really interested to hear what you think.  Do you still consider yourself a scientist?  Do you wrestle with a new identity?  Am I just over-thinking this whole thing? (A distinct possibility)  Will I ever write a post that doesn’t contain too many parentheses?

To Do:


I live by my To Do lists.  During the craziest times in my Ph.D. I had my To Do list broken down not only by day but also by time of day.  It sounds a little, or a lot, hyper-Type A, but it kept me calm in the swirl that was getting my dissertation research done.  I’ve always relied on actual written lists.  A recent article in the New York Times (https://www.nytimes.com/2017/11/22/business/laptops-not-during-lecture-or-meeting.html) highlighted the advantage of writing over typing in terms of retention in our memories.  To me, there’s something about the tactile pleasure of checking an item off my To Do list (always with a different color) that is motivating.

Recently though, my tried and true strategy has been failing me.  I think there are two reasons for this.  The first is that I haven’t had time in the past few weeks to update my To Do list, which means I definitely haven’t had time to get through the list.  The second, a consequence of the first, is that I constantly feel as though I’m just fighting fires instead of having time to sit and think about the larger items that need to get done, review my notes from meetings, or piece together the various bits of information I’ve gotten throughout the day.  Compounding this is that I’m still trying to learn both what the organization does and how to be a manager.  I know, I know, waa, waa, waa, complain, complain, complain.

We’re all super busy.  Every manager has days that are packed with meetings, with less and less time to reflect and gather their thoughts. Everyone feels underwater as they adjust to a new job.  None of this is front page news.  I have felt some of this before in the many transitions I’ve made over the years; I think it’s the magnitude of this transition that is overwhelming me.  Currently, there are two main pain points in this transition: learning and time management.

I’ll tackle the learning component first.  I’ve heard the first six months to a year at an organization described as “drinking from a fire hose”, and it can definitely feel that way.  My current struggle is trying to find the line between how much I need to learn about the technical details of what my team is doing and how much I need to learn general management skills.  Now, I know the answer to this question.  I need to focus way more on learning general management skills.  I have a team of middle managers who are very skilled technically that I can rely on, and others in the organization I can go to with questions.  It is much more important that I learn how to manage and lead. While I know that is true, every single scientific bone in my body is saying, “you have to be a technical expert.  That is the only way people will respect you.”  That message was driven into me so many times throughout my career that it’s a hard one to silence now, and it is a message I still get.  Even though I have a PhD, the fact that I don’t know all the ins and outs of this particular field still results in some skepticism that I can feel, and that is sometimes voiced.  The result is me feeling pushed and pulled in different directions and spinning my wheels instead of focusing my energy on the activities most likely to result in my success.

Aside from talking to mentors, my boss, etc. to get feedback on what I should be focusing on (something I do regularly), one item that has helped has been “The First 90 Days”.  I really can’t say enough about this book, and since this is a new blog, you can rest assured that I’m not getting paid to promote it.  Even if you have been in your current job forever, you should read this book.  It is a super practical guide to how to set yourself up for success coming into a management position, what to focus on, how to go about learning, etc.  I read the whole thing through and now I’m going back through to implement certain parts… if only I had the time.  It does advocate for learning only what you absolutely need to know to effectively manage your team and contribute to the organization.  This has been and will continue to be a difficult lesson for me as I battle my inner scientist screaming to be the smartest person in the room.  Swallowing my ego would be so much easier if it wasn’t quite so big.

Which brings me to the second struggle, time management.  I used to watch the managers and leaders I worked for flit from meeting to meeting, the better ones always on top of what the meeting was for, always having germane and insightful input, and never seeming to have time to do anything else. I still wonder about that, only now I’m locked in that cycle too. My days are packed with meetings.  I filled my office with things I like to make it a nice work space, and I’m rarely there.  There are several fairly easy solutions to this, such as blocking out time on my calendar, answering emails before bed, etc.  I think what I’m struggling with more is how to time-manage my brain.  How to shift gears relatively quickly.  How do I go from meetings, to down time, to meetings again and retain everything, synthesize everything, and pull it all together into an overall vision for my team?  These are skills and strategies that good leaders learn along the way, and I’m at a loss for how to learn them. I’ve been toying with the idea of hiring an executive coach for some time now, and I’m thinking more and more that that is the way to go.  I will of course continue reading books and articles (Harvard Business Review is a fav) and attending courses at work, but I think I need individual coaching from someone outside my organization, someone who has worked with other professionals before and can provide an objective opinion.  I will also continue to use my tried and true strategy of To Do lists, but I know now that, as with much of what I carried into this job, they are not enough.  That is one of the chief lessons of “The First 90 Days”: the skills and qualities that got you to this new management position are not enough to allow you to succeed in it.  It’s more than a bit sobering, and I’m hoping it will be motivational for me soon too.

So, no snappy “things I’ve learned” bullet points to end this post.  Just a plea for help/suggestions.  If you have any tried and true organization, time management, or how-to-restructure-your-brain tips, I would love to hear them.