Social distancing requirements prompted by COVID-19 have left many sponsors looking for alternatives to in-person study visits, both for on-going clinical trials and for trials planning to start soon. As we’ve been having conversations with sponsors about their options, we’ve noticed some confusion around three related but distinct terms—centralized review, centralized scoring, and remote assessment.

Centralized Review

In centralized review, a rater at the site administers and scores the assessment. The assessment may be delivered through eCOA or on paper, during an on-site visit (most common) or via phone or videoconference. After the site’s rater administers and scores the assessment, a third party (in our case, VeraSci) reviews it. This review can include data checks based on expected values, review of audio or video recordings, and checks for completeness. The reviewer is a certified rater.

Centralized Scoring

In centralized scoring, a rater at the site administers the assessment. The assessment may be delivered through eCOA or on paper, during an on-site visit (most common) or via phone or videoconference. Following administration, an experienced clinician or data monitor (depending on the assessment in question) scores the assessment. Centralized scoring can increase data quality for assessments with complex scoring rules, like the Brief Visuospatial Memory Test (BVMT).

Remote Assessment

Remote assessment, sometimes referred to as remote administration or centralized assessment, involves a rater administering an assessment from a different location than the subject being assessed. The subject could be at a clinical site, at home, or in a healthcare setting like a hospital or a nursing home. Depending on the assessment, the mode of administration could be phone, videoconference, email, or some combination. Remote assessments make sense not only during social distancing but also in cases where sites may not have qualified raters available, which is common in some rare disease trials. Using a smaller pool of remote raters can also increase data consistency and reduce variability between sites. Additionally, there is some evidence that using remote raters can mitigate the placebo effect. Finally, rater training costs can be reduced, or in some cases eliminated, when the centralized raters are already trained on the assessments in use.

Contact us to learn more about how centralized review, centralized scoring, or remote assessment could be used in your next trial.

The COVID-19 pandemic and associated social distancing measures are creating unprecedented challenges for everyone working in clinical trials and drug development. In this series, we’re sharing some of the ways VeraSci is addressing these challenges.

With many subjects isolating themselves at home and many sites concerned about seeing subjects in person for routine study visits, we’re seeing increased interest from sponsors in remote assessments. For some on-going trials, conducting remote assessments may be a better choice than not collecting any data because of missed visits. In other cases, sponsors had planned to start studies soon and are now considering site-less or virtual trials as a way to keep their development programs on track.

The good news is that in many cases there are good options that do not require cancelling or delaying trials. We’re actively working with a number of sponsors to recommend ways to keep their trials and development programs moving. Each trial is unique and will call for its own detailed plan. Here are some examples of the questions we are hearing and some of the key considerations for getting these plans to work well.

Cognitive assessments are among the most frequently requested types of assessments, and traditionally the majority of these have been administered by a clinician during an on-site visit. This is a complex topic, but we wanted to share some of what we’re considering when it comes to some of the most popular cognitive assessments—like the Alzheimer’s Disease Assessment Scale—Cognitive Subscale (ADAS-Cog), the Brief Assessment of Cognition (BAC), and the MATRICS Consensus Cognitive Battery (MCCB)—and specific cognitive tests like the Digit Symbol Substitution Test (DSST). While EMA and FDA have indicated a willingness to be flexible, this is an emerging situation, and this blog reflects our most recent thinking on the topic.

One of the most significant hurdles in remote cognitive assessment relates to patient populations and their ability to access and use the technology involved. The trials where these assessments are used may include geriatric patients who are not comfortable with technology, cognitively impaired subjects who will have difficulty following and remembering directions related to technology, and subjects whose mental illness may mean they don’t have access to technology (for example, patients with schizophrenia). The level of caregiver support that subjects have is a significant factor to consider. Caregivers already play an essential role in the ability of many of these subjects to participate in a trial. In some cases, we may need to look at whether caregivers can assist subjects with the set-up of remote assessments.

Many assessments include multiple subtests from which a composite score is created. For example, there are 10 tests in the MCCB. Some of the tests can be more easily administered remotely than others. For example, a test where the subject is asked to name all of the words they can think of that start with a particular letter could easily be done over the phone or videoconference. On the other hand, a test where the subject needs to manipulate physical items may not be easy to replicate in a remote setting. In these cases, kits can be delivered to subjects. Some tests require drawing. The rater can observe the subject drawing, capture an image, and have the original mailed to the site.

For each alternative assessment option, we have considered the perspectives of patients, raters, sponsors, and regulatory agencies in evaluating whether the alternative is feasible and whether the modifications would render it invalid. Additionally, when appropriate, we’re creating an alternative composite score for cases where one or two of the tests cannot be administered. In some cases, test developers provide information about using the assessments in a remote setting. For example, the Montreal Cognitive Assessment (MoCA) has been validated for use in two formats: an abbreviated version can be administered over the phone, while the full version can be administered over videoconference. In most cases, we’re finding that there are paths forward that will produce valid, meaningful data.
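To make the idea of an alternative composite concrete, here is a minimal sketch of one way a pro-rated composite could be computed when a subtest is skipped. The subtest names and normative values below are invented for illustration; they are not actual MCCB norms or any validated scoring method.

```python
# Illustrative only: hypothetical normative (mean, SD) per subtest.
# These names and numbers are made up for the sketch.
NORMS = {
    "verbal_fluency": (42.0, 9.0),
    "symbol_coding": (50.0, 10.0),
    "verbal_memory": (25.0, 6.0),
}

def composite_z(raw_scores):
    """Average z-score over the subtests actually administered.

    Missing subtests are simply omitted (pro-rating); a real study
    would prespecify this rule in the statistical analysis plan.
    """
    zs = [
        (raw_scores[name] - mean) / sd
        for name, (mean, sd) in NORMS.items()
        if name in raw_scores
    ]
    if not zs:
        raise ValueError("no subtests administered")
    return sum(zs) / len(zs)

# Full battery vs. one subtest skipped during a remote visit:
full = composite_z({"verbal_fluency": 51, "symbol_coding": 55, "verbal_memory": 22})
partial = composite_z({"verbal_fluency": 51, "symbol_coding": 55})
```

A real study would also prespecify a minimum number of subtests required for the composite to be valid, rather than deciding case by case.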

Technology is also an important consideration. In some cases, we can extend existing technologies that are already in use in a study. We may also need to acquire additional tools, such as telemedicine videoconferencing systems that meet the necessary regulatory requirements for use in clinical development. We need to determine how subjects would access technology at home. Bring Your Own Device (BYOD) will be faster and less expensive to implement, but it rests on a lot of assumptions and introduces variability. Subjects don’t necessarily have appropriate hardware and may not have sufficient internet access. Our tech support staff will need to support BYOD devices that they don’t have access to or experience using. Leveraging subjects’ personal devices also means we won’t have control over what is done on the device.

Provisioning devices eliminates many of these issues but will be costly and time-consuming. Provisioned devices allow control over screen size, what other software is installed and in use, and will enable us to provide internet access through a cellular connection for subjects that need it. It also means that our tech support staff will know a lot more about the devices in use and how they are configured, allowing them to provide a more seamless support experience. In some instances, we may end up with a hybrid solution where some subjects and sites go with a BYOD model, and we provision some additional devices to individual sites or subjects. Making the best decision for each study requires close communication and coordination with sponsors and sites. While the technology issues are complex, they are also solvable. In our experience, technology alone hasn’t been a reason to halt or delay trials.

Training raters is also a consideration. Delivering rater training remotely isn’t a significant issue; it’s something we already do. However, if any modifications are being made in order to administer assessments remotely, raters may need supplemental training. When it comes to assessments that were not originally designed for remote administration, new or updated manuals are needed. It also makes sense to add audio or video recording for studies that weren’t using it previously, so that centralized reviewers can confirm that remote assessments are being conducted consistently and properly.

Do you have questions about how to make remote cognitive assessments a reality for your trial? Contact us for more information.

Multiple sclerosis is a complex disease that presents numerous challenges in selecting appropriate measurements for a given clinical trial. These challenges can include:

  • Handling inflammation and relapses versus degeneration and progression
  • A lack of understanding of the underlying pathophysiology
  • Therapies designed for delivery early in the disease that are intended to prevent disability later
  • Symptoms that don’t directly correlate with disease activity

Because of the wide array of challenges, it can be tough to determine what should be measured as part of a clinical study.

One of the first steps in selecting an outcome or endpoint is understanding who the study is trying to convince. Neurologists, patients, regulators, and payors all have different perspectives. Neurologists prefer measurements they are familiar with, like the Expanded Disability Status Scale (EDSS). For patients, it’s important to demonstrate that the therapy will make them feel better and improve their quality of life. Regulators want to see efficacy demonstrated through validated scales and a well-established safety profile. Payors are interested in value and cost-effectiveness.

The type of therapy and the trial design will also influence which assessments and outcome measures are most appropriate. MS can present with a broad range of symptoms, and it may make sense to measure only the symptoms the therapy is expected to improve. The duration of the trial also needs to be considered; some endpoints aren’t likely to show a clinically meaningful difference in short trials. The control arm of the study is another consideration: placebo-controlled trials are likely to show a larger treatment effect, so if an active-comparator control is needed, you may need a more sensitive instrument.

There are a number of potential endpoints that can be used in MS studies. The list below summarizes some commonly used assessments:

  • Expanded Disability Status Scale (EDSS): Quantifies disability in MS patients and can be used to monitor changes over time
  • Symbol Digit Modalities Test (SDMT): Cognitive measure sensitive to the slowed information processing common in MS
  • Timed 25-Foot Walk (T25-FW): Measures mobility and leg function based on a timed 25-foot walk
  • 9-Hole Peg Test (9-HPT): Quantifies upper extremity function
  • Low Contrast Letter Acuity (LCLA): Assesses visual disability in multiple sclerosis
  • Annualized Relapse Rate (ARR): The average number of relapses a group of patients in a study experiences in one year
  • Magnetic Resonance Imaging (MRI): Measures brain volume loss, inflammation, lesion load, and lesion activity; typically used in conjunction with clinical measures
  • Optical Coherence Tomography (OCT): Rapid, inexpensive imaging technique measuring retinal thinning, which correlates with both visual function and global MS disability scores
  • Evoked Potentials: Assess the speed of message delivery from the sensory nerves to the brain; often used in the diagnosis of MS
  • Multiple Sclerosis Impact Scale (MSIS-29): Patient-based outcome measure of the impact of multiple sclerosis
  • Multiple Sclerosis Rating Scale Revised (MSRS-R): Patient-reported assessment of functional status
  • Multiple Sclerosis Individual Outcome Assessment (MSIOA): Monitors patients’ perspective on how much emotional or psychological distress their symptoms cause them
  • Multiple Sclerosis Functional Composite (MSFC): Reflects the varied clinical expressions of MS

Traditionally, phase III trials have used the EDSS and relapse rate as primary measures, with various MRI measures used as secondary outcomes. The EDSS is the most widely used assessment and is also often part of the inclusion criteria for a trial. However, it does have limitations, which increasingly lead researchers to consider alternative or additional measures. The EDSS is not particularly sensitive, and it can be difficult to show a significant change over the duration of a typical clinical trial. Additionally, a number of domains, like cognition, mood, and quality of life, are not assessed, yet improvement in these areas is very important to patients.

Relapse rate is another traditional outcome measure. It is effective at demonstrating short-term efficacy, but relapse rates only partially correlate with worsening disability over time. A number of newer assessments, such as the MSIS-29, the MSFC, and the MSIOA, are designed to provide more information on the domains that aren’t captured by the earlier assessments and to reflect patient concerns about activities of daily living and quality of life. The MSIOA was developed to address the need for a patient-centric account of symptoms; it captures the patient’s perspective on the amount of emotional or psychological distress their symptoms cause. Patients, regulators, and payors are increasingly interested in patient-centered symptom outcomes in addition to traditional measures of disease activity.
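For reference, the annualized relapse rate is straightforward arithmetic: total relapses across an arm divided by total patient-years of follow-up. A small sketch with made-up numbers:

```python
def annualized_relapse_rate(relapse_counts, followup_years):
    """ARR for one treatment arm: total relapses / total patient-years.

    relapse_counts and followup_years are parallel per-patient lists.
    """
    if len(relapse_counts) != len(followup_years):
        raise ValueError("need one relapse count and one follow-up time per patient")
    return sum(relapse_counts) / sum(followup_years)

# Three hypothetical patients followed for 2.0, 1.5, and 0.5 years,
# with 1, 0, and 1 relapses respectively: 2 relapses / 4 patient-years.
arr = annualized_relapse_rate([1, 0, 1], [2.0, 1.5, 0.5])  # 0.5 relapses/year
```

Weighting by total follow-up time matters: patients who leave the study early contribute fewer patient-years, so their relapses are not over- or under-counted relative to patients followed for the full duration.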

Depending on the therapy, the intended population, and the stage of development, there are numerous assessments that may be considered for clinical trials in MS. Need help determining the right outcomes assessments and endpoints for your program? Contact us.

References and Additional Information

National Multiple Sclerosis Society website

van Munster, C.E.P., Uitdehaag, B.M.J. Outcome Measures in Clinical Trials for Multiple Sclerosis. CNS Drugs 31, 217–236 (2017).

Gray, O., McDonnell, G., & Hawkins, S. (2009). Tried and tested: the psychometric properties of the multiple sclerosis impact scale (MSIS-29) in a population-based study. Multiple Sclerosis Journal, 15(1), 75–80.


Many CNS clinical trials include motor assessments as either an outcome measure, like in many Parkinson’s disease trials, or to detect motor side effects like tardive dyskinesia. One newly developed and exciting scale for remote motor assessment is the Remote Movement Disorder and Assessment Scale (RMDAS). Unlike many other options, the RMDAS was developed specifically for remote assessment.

The RMDAS allows the rater to complete two standard scales—the Abnormal Involuntary Movement Scale (AIMS) and the Barnes Akathisia Rating Scale (BARS)—along with a new scale, the Remote Extra Pyramidal Assessment Scale (REPAS), that replaces the traditional Simpson Angus Scale (SAS). The RMDAS is administered by videoconference and includes instructions to help the subject position the camera (any device with a camera will do), set up the room (a chair without arms placed a measured distance from the camera), and set up the app.

The RMDAS removes the need for the patient to visit a clinic while still allowing raters to assess:

  • Extrapyramidal signs and symptoms (slowness, rigidity, and tremor)
  • Tardive dyskinesia
  • Akathisia
  • Tremor

As always, there are no one-size-fits-all answers when it comes to choosing the right scales, outcome assessments, or endpoints. For each trial, you must consider the indication being studied, characteristics of the investigational product being assessed, the patient population being studied, and more.

Additionally, while recent guidance from the EMA and FDA reflects a willingness by regulatory authorities to be flexible, it won’t be the Wild West. VeraSci has extensive expertise and experience working with the RMDAS as well as other motor assessments. Whether you are planning a trial starting soon or trying to decide what to do with an on-going study, we can provide insight into the best options for your trial.

Facing the Challenge: Remote Assessments for Clinical Trials During the COVID-19 Pandemic

The COVID-19 pandemic and associated social distancing measures are creating unprecedented challenges for everyone working in clinical trials and drug development. We wanted to share with you how VeraSci is confronting some of these challenges.

The first topic we want to discuss is transitioning to remote assessments for on-going studies. With many subjects isolating themselves at home and many sites concerned about seeing subjects in person for routine study visits, on-going clinical trials will need to make difficult decisions about whether to skip scheduled visits, try to conduct visits remotely, or, in some instances, delay or cancel the trial. Many of our trials use a combination of patient-reported outcomes and clinician-administered assessments designed to be administered on site. At present, we are tackling these issues on a study-by-study basis, and while there are no easy answers, there are some common themes that have been bolstering our contingency plans. Here are some general tips:

  • Read the regulatory guidance. FDA, EMA, and MHRA have all recently issued guidance for clinical trials to address the challenges of the day. We recommend that all contingency plans start, as ours do, by carefully considering the advice in these documents.
  • Patients aren’t the only ones staying home. Depending on the region and institution, raters, investigators, and other site staff may also be working from home. What do we need to do to support them? They may need additional equipment, training, and technical support to succeed. We can deliver standardized remote rater training and have an extensive training help desk that can support assessment administration.
  • Remote assessment is possible! As of March 17, 2020, the Centers for Medicare and Medicaid Services (CMS) have agreed to pay for various forms of telemedicine, including teleneuropsychology. Keep an eye out for a future post from us that addresses specific scales and assessments that can be delivered remotely. We have found that in some instances, even when we haven’t used a particular assessment in a remote setting, someone else has. For example, several dementia studies are using the Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and there is literature to support its use as a remote tool. At this point we do not know whether regulatory agencies will require validation for online administration of otherwise validated tools. We think some tools, like the Montgomery-Åsberg Depression Rating Scale (MADRS) and the Positive and Negative Syndrome Scale (PANSS), will be straightforward to implement remotely, as they simply require that a rater conduct an interview. Performance-based assessments, like the ADAS-Cog, will be more challenging, but we have worked out strategies for most components of the tool such that proxy scores can be calculated.
  • Prepare to provide increased tech support. Many of our contingency plans will include deploying new technologies, extending existing technologies to new locations and environments, or a combination of both. We have invested in excess support capacity and are prepared for a significant increase in the volume of support calls. We are training our support staff to handle the new types of questions that will arise. With all of the new challenges site staff are facing, the last thing they need is a frustrating tech support encounter. We will have humans with broad expertise making sure that trials can stay active, and we resolve 98% of questions on the first call.
  • Consider the challenges posed by individual assessments. In some instances, translating an assessment to an electronic format is pretty straightforward. For other assessments, the translations aren’t so obvious or may not be possible. For example, some assessments require subjects to physically manipulate objects. Is there an electronic equivalent? Do you need to send some sort of kit and then observe by video? After adapting over a hundred assessments to an electronic format, including the Brief Assessment of Cognition (BAC), our scientific and technology teams understand what needs to be done to collect valid data. We have been applying innovative operational strategies developed by neuropsychologists to meet the current need.
  • Hybrid approaches to remote assessments may be needed. Because some assessments either already can be administered remotely or can easily be converted to remote assessment but others can’t be administered remotely, you may have to decide whether a hybrid solution is feasible. For example, one of our trials uses the MATRICS Consensus Cognitive Battery (MCCB) and an interview-based assessment, the Schizophrenia Cognition Rating Scale (SCoRS). The SCoRS can be easily administered over the phone. Some of the MCCB tests can be administered remotely with audio only, while others require video input.
  • Start thinking about data quality and consistency issues. Making changes to the way assessments are administered mid-study (and in some cases making changes to the assessments themselves) will undoubtedly create issues with data consistency. While this is something we would never do under normal circumstances, under current conditions we all have to find ways to adapt, and regulatory guidance permits us to be creative. Even doing the best we can, we must consider the impact this will have on the data. Consulting with experts experienced in the complex analysis and management of data is a must. Strategies for handling missing data and defining intercurrent events may need to account for COVID-19-related switches in the method of assessment, and we will refine these strategies based on forthcoming guidance from regulatory agencies.

We will continue to post about how we are facing these challenges in the coming weeks as the situation develops and as we learn more. We hope this is useful and look forward to hearing from you about the issues you face and the approaches you are taking.

Many sponsors and CROs aren’t sure what to expect when it comes to the cost of an Electronic Clinical Outcome Assessment (eCOA) platform, but the advantages of using eCOA are well documented. While upfront costs are a substantial consideration for sponsors and CROs, it’s important to remember that, overall, eCOA can decrease costs through a reduction in on-site monitoring time, less data cleaning, and improved patient retention. Costs can vary substantially based on the platform selected and the nature of the clinical trial.  Here are some key cost drivers that will impact the cost of eCOA for a given study:

  • The number of sites is one of the most important cost drivers because it touches almost all components of the budget—the number of devices that will be needed, the number of people that will need training, and the amount of support that will be needed, to name a few. Additionally, if some high-enrolling sites require multiple devices, that will increase costs.
  • The type of device used will also influence the total cost. Tread carefully, though—selecting a lower cost device at the outset can lead to hidden costs down the road. Replacing devices can be expensive, especially if it means that patient visits are rescheduled because a device isn’t available. Lower quality devices can also lead to investigator dissatisfaction, so it is important to balance device cost, quality, and desirability.
  • The number and complexity of assessments will impact the cost. There will be costs associated with licensing assessments, implementing the assessments on the device, and conducting data review. Using an eCOA provider that also manages licensing can simplify the start-up process, creating both budget and timeline efficiencies. Consider carefully whether each assessment selected is truly needed. This will not only control the eCOA associated costs, but it will improve patient engagement and retention. Complex assessments may require additional set-up and development time.
  • New or unique assessments will increase costs and may increase timelines. Existing assessments typically can be easily deployed and configured, but new or unique assessments require additional development, testing, and validation. If you are considering a novel endpoint or assessment, consider using an eCOA provider with in-house expertise in developing and validating assessments and endpoints.
  • Geographic distribution of sites will affect the eCOA budget, particularly the number of countries and languages involved. For each new language, assessments and supporting materials will need translation, and multi-language technical support may also be required. For certain assessments, cultural adaptation may be necessary to generate consistent results across cultures. eCOA suppliers with in-house translation capabilities may be more cost-effective and able to deliver on tighter timelines. Some countries have import and logistics requirements that can increase costs. Global eCOA suppliers can provide insight on which countries have additional logistical hurdles and can help you overcome them.
  • The amount of data review needed will influence the overall budget. The volume of data review depends on the number and type of assessments and the number of subjects.  Additionally, when it comes to data review, the criticality of the data from the assessments is a consideration.  Data collected through eCOA used in a primary or key secondary endpoint or to evaluate safety requires more scrutiny than exploratory data or data used to support subject eligibility. An experienced, knowledgeable vendor can offer a risk-based approach that maximizes quality with a cost-effective solution.
  • Audio and video recording are important features that can improve data quality; they can be used to examine whether entries were correctly recorded in the event of outlier data and to ensure rater standardization across trial sites. Both deploying these features and reviewing the associated data may increase cost, so consider where and how you use them. The features can also be used to provide oversight and feedback to raters and to determine which raters may need re-certification. Additionally, the sentinel effect, the tendency for people to perform better when they know they are being evaluated, may improve rater reliability.
  • Study duration will affect cost as well, with longer studies incurring higher costs, especially with regard to on-going support. Expedited start-up timelines will often incur additional charges.

Like many eClinical technologies, eCOA does add to upfront costs; however, in most cases, those costs are offset by enhanced data quality, speed to database lock, and downstream cost savings. Are you considering eCOA for your next trial? Learn more about Pathway eCOA or contact us for a customized estimate.