Clinical evidence for medical devices can include data regarding the usage, or the potential benefits or risks, of a therapeutic good derived from sources other than traditional clinical trials, including Real World Evidence (RWE) and patient reported outcomes.
The definitions below are taken from the US Food and Drug Administration (FDA) and previous TGA guidance documents.
| Word | Meaning |
|---|---|
| Real World Data (RWD) | “data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources.”[1] |
| Real World Evidence (RWE) | “the clinical evidence about the usage and potential benefits or risks of a medical product derived from analysis of RWD.”[2] |
| Patient Reported Outcome Measures (PROMs) | “patient outcomes data, reported directly by patients that can be interpreted as information that captures patients’ experiences, perspectives, needs and priorities.”[3] |
This document follows the 2021 TGA review into RWE and PROMs in the regulatory context.
The 2021 review identified both internal and external ambiguity surrounding the current usage of RWE and PROMs, which limited their adoption for regulatory decision making.
This guidance document was prepared to provide additional information regarding how and when RWE can be submitted to the TGA.
This guidance also aligns with international frameworks in use by other regulators such as the FDA guidance on RWE and the work of Health Canada in optimising the use of RWE to inform decision making.
The increasing use of technology in healthcare, such as electronic health records and the introduction of wearable devices, has led to the accumulation of a wealth of RWD that is becoming more accessible for research purposes, including for characterising the safety and performance profile of medical devices.
Although RWD was not originally collected in the context of a highly controlled clinical trial,[4] it can still be valuable if it is deemed to be of sufficient quality and the statistical methods employed to analyse it and produce the RWE are scientifically valid.
Apart from registries governed by academic institutions and designed for research purposes, the majority of RWD is not explicitly collected for the purposes of research.
While this type of RWE has typically played a prominent role in post-market surveillance of medical devices, there is increasing recognition of its potential utility in supporting premarket approval decisions, in addition to traditional clinical trial data.
As the process of clinical evaluation and generating evidence of compliance with the Essential Principles is required throughout the life cycle of medical devices, RWE provides a potential source to demonstrate some aspects of safety and performance to fulfil regulatory requirements.
This guidance intends to outline for sponsors and manufacturers the types of RWE relevant in the Australian context and the TGA’s process for assessing RWE, including limitations of RWE compared to traditional clinical trials for safety and performance.
If a sponsor intends to include RWE in a regulatory application, the TGA encourages engagement via a pre-submission meeting to ensure the proposed study design and analytical plan are appropriate.
This guidance represents the TGA’s current thinking on this topic. It does not establish legally enforceable responsibilities and should only be viewed as recommendations.
In addition to this guidance, the Department of Health is undertaking a further review to optimise RWE to support health technology assessment in Australia.[5]
The TGA recognises that the use of RWE in regulatory submissions is a rapidly evolving field and advises this guidance may be updated periodically to reflect changes in the regulatory landscape as it applies to RWE utilisation.
Sources of RWD
Registries
Device registries are systematic data collections of medical outcomes following use of medical devices.
They play a unique and important role in medical device surveillance, providing impartial de-identified data about procedures and devices not regularly collected by other means.
They can also give valuable information on device performance and safety, including clinical or functional outcomes and quality of life, as well as performance of related treatments for comparison.
Examples of Australian medical device registries include:
- Australian Orthopaedic Association National Joint Replacement Registry
- Australian Breast Device Registry
- Australasian Pelvic Floor Procedures Registry
- Victorian Cardiac Outcomes Registry
- Dental Implant Registry
- Australian Spine Registry
Overseas device registries can also provide useful information on devices intended for market approval in Australia.
These include national joint replacement registries in the UK, NZ, Europe and Scandinavia and the transcatheter aortic valve registry in the United States.
Some medical device companies and healthcare institutions operate their own registries that capture performance and safety outcomes.
Product, clinical, disease, quality, population and procedure registries are also utilised in device regulatory submissions and surveillance and may provide useful sources of RWD.
The Australian Commission on Safety and Quality in Health Care provides a Framework for Australian clinical quality registries.
It has also developed the Australian Register of Clinical Registries, which lists the purpose and organisation of clinical registries at all stages of development.
It is important to note the relative strengths and weaknesses of registries including differences in:
- maturity,
- coverage,
- representativeness,
- operational aspects,
- organisational structure,
- data governance (government versus industry funded),
- type of data collected (e.g. demographics, comorbidities, patient-reported outcomes, adverse events, revisions including reasons) and how it is collected,
- follow up duration and frequency, and the proportion of total patients captured (voluntary vs mandatory).
The RWD being used in the registry must be generalisable to the intended population of the device.
The completeness of a registry in terms of proportion of treated patients captured is a key factor in its reliability and overall utility in regulatory decision making.
However, even if incomplete or only capturing a subset of the patient population, the data could still have some utility in supplementing evidence for performance and safety in a particular cohort.
Some registries also incorporate patient reported outcome measures (PROMs) that capture the patient experience.
Electronic health records (EHRs)
Healthcare databases, including EHRs, are systems used by healthcare practitioners to record routine clinical and laboratory data during their day-to-day practice.[6]
EHRs contain details of patient clinical presentations, investigations, diagnoses and management and are generally stored online in various secure databases.
In Australia, there are multiple databases in use and not every hospital or general practice utilises the same database.
Additionally, a minority of hospitals and general practices may still operate with handwritten medical notes.
Treating clinicians are responsible for documenting in the medical records for a patient encounter and the quality and completeness of this documentation will naturally vary.
Examples of EHR data sources include clinical letters (e.g., referral letters), discharge summaries, diagnostic test results (blood tests, imaging tests), procedures undertaken, medication prescribed, adverse events and vital signs.
Administrative claims data
In Australia, Medicare is the Commonwealth-funded health insurance scheme that provides free or subsidised health care services to the Australian population.
The Medicare Benefits Schedule (MBS) is a key component of the Medicare system.
It lists a range of professional services, and allocates a unique item number to each service, along with a description of the service (the ‘descriptor’).
In broad terms, the types of services on the MBS include consultation and procedural/therapeutic (including surgical) services, as well as diagnostic services.[7]
When a patient presents to a clinician with a health concern, any investigations, diagnoses and treatment (e.g., procedures) will be recorded in the form of an MBS item number.
This is the case for patients presenting to a general practitioner, specialist outpatient or in hospital.
For instance, a patient undergoing an endovascular abdominal aneurysm repair will be captured under item numbers 33116 or 33119.
MBS item numbers can therefore be utilised as a source of RWD.
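As a purely illustrative sketch of how claims data might be turned into RWD, the Python snippet below identifies a hypothetical device-exposed cohort from a claims extract using the item numbers mentioned above; the table layout, column names and patient records are invented assumptions, not an actual MBS data format.

```python
import pandas as pd

# Hypothetical claims extract; real MBS data layouts will differ.
claims = pd.DataFrame({
    "patient_id": [101, 101, 102, 103, 104],
    "service_date": pd.to_datetime(
        ["2023-02-01", "2023-06-15", "2023-03-10", "2023-04-02", "2023-05-20"]),
    "mbs_item": [23, 33116, 33119, 104, 33116],
})

# Item numbers of interest (endovascular abdominal aneurysm repair, as noted above).
evar_items = {33116, 33119}

# Identify the exposed cohort: each patient's first qualifying claim becomes the index date.
evar_claims = claims[claims["mbs_item"].isin(evar_items)]
cohort = (evar_claims.sort_values("service_date")
          .groupby("patient_id", as_index=False)
          .first()
          .rename(columns={"service_date": "index_date"}))

print(cohort)
```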
Private health insurance companies also hold records of claims lodged by individuals covered under a private health insurance scheme that may contain details of diagnoses and treatment.
Participation in private health insurance is voluntary; however, Medicare covers most of the Australian population.
Known pitfalls of claims data include fraud and upcoding, in addition to some of the common data characteristics of RWD.[8]
Device-generated data
Device-generated data includes data produced by medical devices, including but not limited to devices worn by or implanted in the patient and physiological monitoring devices.
Information automatically collected and stored by implanted devices can also be a source of RWD.
For instance, some companies hold databases containing information on heart rhythm, device monitoring and treatment delivered in patients with pacemakers or implantable cardioverter defibrillators (ICDs).
In addition to pacemakers and ICDs there are also remote non-invasive heart monitoring devices (e.g., Holter monitor) that record data on patient heart rhythm and produce reports for healthcare professionals to aid in clinical decision-making.
Patient generated data
Patient generated data is created, reported, or gathered by patients, including in in-home use settings (e.g., data from digital health technologies such as wearables).[9]
Digital health technology means a system that uses computing platforms, connectivity, software, and/or sensors for health care and related uses.
These technologies span a wide range of uses, from applications in general wellness to applications as a medical device.
They include technologies intended for use as a medical product, in a medical product, or as an adjunct to other medical products (devices, drugs, and biologics).
They may also be used to develop or study medical products.[10]
Patient generated data includes patient-generated health data (PGHD), which are health related data created, recorded, or gathered by or from patients (or other family members or other caregivers) to help address a health concern.
PGHD include but are not limited to:
- Health history
- Treatment history
- Symptoms
- Biometric data
- Patient reported outcome measures (PROMs)
Examples of PGHD include blood glucose monitoring or blood pressure readings using home health equipment, or exercise and diet tracking using a mobile app or wearable device/fitness tracker.[11]
A potential pitfall of data gathered from mobile devices or wearables is selection bias, as not all patients possess these devices.
Other clinical experience data
Published observational and epidemiological studies
Non-interventional studies are known as ‘observational’ as the intervention is not allocated according to an investigational study protocol and is instead prescribed during routine clinical practice i.e., according to current clinical practice guidelines and the best judgement of the physician.
These studies may utilise a RWD source such as EHRs or involve direct collection of patient data for research purposes such as through a case report form.
Patient exposure and outcomes captured in routinely collected data sources are collated and analysed utilising a study methodology such as a cohort, case-control, cross-sectional or case series design.
They may also include patient reported measures.
An example is a retrospective case series at a single hospital examining the safety profile of an acellular dermal matrix used in breast reconstructive surgery by analysing the medical records of patients treated with the device to determine post-surgical complications within a certain timeframe.
In this scenario, the study is limited as it can only analyse the information that has already been collected, which may be incomplete and lacking key endpoints for an analysis given the information was not originally collected for such a purpose.[12]
Post-market data
Post-market data may be collected by manufacturers, sponsors, regulatory agencies, registries, or others following the registration and supply of a device in the market.
Examples of post-market data include:
- post-market clinical follow up (PMCF) studies, which may be retrospective or prospective
- single-arm studies nested in a registry
- the routine collection and analysis of sales, complaints and adverse events for devices in the market
- any market actions such as recalls or product corrections, including those of different geographical regions and for comparator or other similar products.
For example, a company’s internal records of the types and rates of adverse events reported on a medical device, stratified by year and geographical jurisdiction.
Companies rely on end-users voluntarily reporting complaints and adverse events and this passive method of data collection can result in under-estimation of adverse event rates in post-market surveillance.
See TGA Clinical Evidence Guidelines.
TGA position on the use of RWD and RWE
The TGA acknowledges the benefits of RWE and routinely examines RWE during assessment of premarket applications and post-market surveillance and monitoring of devices.
If the device is already available in other markets, it is expected that some RWE will be supplied in the premarket application for inclusion.
Once included in the Australian Register of Therapeutic Goods (ARTG), sponsors of devices have an ongoing requirement to generate evidence to demonstrate an adequate level of safety and performance throughout the device’s lifecycle.
Manufacturers should closely monitor emerging RWD and RWE relevant to their device.
Both favourable and unfavourable RWE should be collated, assessed and responded to in line with routine post market surveillance activities.
Randomised controlled trials (RCTs) remain the gold standard for demonstrating safety and performance of medical devices for the purposes of regulatory approval.
In RCTs, the control and treatment arms are constructed so that they have similar characteristics and differ only in the selected intervention.
This increases confidence in ascertaining a cause-effect relationship between the device and the outcomes.
Bias and confounding are minimised as data is collected in a controlled environment with procedures in place to ensure the accuracy of information collection and analysis.
Loss to follow-up and the potential for missing data are minimised as patients are carefully followed up by study investigators to ensure the results are as complete as possible.
There are some disadvantages to RCTs as they apply to demonstrating the safety and performance of medical devices.
One of the main criticisms is the very specific inclusion and exclusion criteria, which mean the results of an RCT are not always generalisable to the population as key patient subgroups are typically excluded (e.g., patients with concomitant illness, the elderly, pregnant women, children, women of childbearing age).
The follow-up in an RCT may not continue for a clinically important length of time that reflects the typical period of usage of a device (e.g., implanted devices intended to be in-situ for life).
Operators in an RCT are usually highly trained and carefully selected such that the outcomes achieved in the trial are likely to be better than in clinical practice when any clinician can access the device and where there may be a learning curve.
Finally, the statistical power of an RCT is often not sufficient to detect rare adverse events or fully characterise the safety profile of a device over an extended period.
Therefore, RWE and RWD are valuable sources of information in the medical device lifecycle and can provide additional information that would be difficult to capture in clinical trials.
This includes the ability to generate data in traditionally excluded populations and a larger sample size to detect rare adverse events.
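The point about rare adverse events can be illustrated with the commonly used "rule of three": to have roughly a 95% chance of observing at least one occurrence of an event with true per-patient rate p, a study needs approximately 3/p participants. The short calculation below is a minimal sketch of that arithmetic; the event rates shown are arbitrary examples rather than figures drawn from this guidance.

```python
import math

def n_for_one_event(p, confidence=0.95):
    """Approximate sample size needed to observe at least one event with the
    given probability, assuming independent patients and a true per-patient
    event rate p: solve 1 - (1 - p)**n >= confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for rate in (0.01, 0.001, 0.0001):  # 1%, 0.1%, 0.01% event rates (illustrative)
    print(f"event rate {rate:>7}: ~{n_for_one_event(rate):,} patients needed")

# A typical pivotal device RCT of a few hundred patients is therefore unlikely
# to observe events that occur in fewer than roughly 1 in 1,000 patients.
```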
RWD sources may also capture routine uses of devices in clinical practice that are not covered by the approved intended purpose.
Hence, RWE can provide information on safety and performance of a device in the ‘real-world’, outside the heavily controlled environment of a traditional clinical trial.
This information can inform regulatory decisions, clinical care and improve overall patient outcomes.
However, some of the limitations in incorporating RWE and RWD into regulatory applications include data quality and bias.
The TGA’s expectation of RWE varies as different levels of evidence are required for each device.
It is therefore not possible to provide an exhaustive list of RWD and RWE accepted by the TGA, as each device and its accompanying clinical evidence is assessed on a case-by-case basis.
The TGA remains the final arbiter in determining the adequacy of the totality of clinical evidence including RWE in substantiating the intended use of a medical device.
The TGA utilises a similar framework to the FDA in assessing the quality of RWD with respect to its relevance and reliability.[13]
Of equal importance to assessing the underlying quality of RWD is the methodological approach taken to analyse the data (e.g., study design and/or statistical methods).
Types of study design that may be used to analyse RWD sources for regulatory purposes include but are not limited to randomised trials such as pragmatic clinical trials, single-arm trials with an external control and prospective or retrospective observational studies.
These studies rely to a varying extent on RWD; pragmatic clinical trials for example may utilise RWD to identify patients fitting the inclusion criteria to facilitate randomisation to intervention/control (especially in rare diseases or situations where it has been challenging to recruit patients for a trial), or to collect some or all the performance and safety outcomes.
The TGA will assess whether the RWD study protocol and analysis plan proposed by the sponsor is appropriate.
Important aspects of real-world evidence to consider include the type of evidence it can provide.
It can reflect a scale of outcome, a point in time in clinical practice, or the impact of a product on the healthcare system.
However, the context in which the data is viewed and analysed as well as potential confounding details can impact on the quality of the evidence.
A critical evaluation is then undertaken to provide supportive clinical evidence for a therapeutic good.
Assessment of data quality is important to the applicability of outcomes, for example (an illustrative sketch follows this list):
- percentage of data collected per subject
- range, median
- degree of missing data resulting in exclusion and number of subjects affected
- useability, health literacy factors
- calibration aspects if applicable
- whether data curation has taken place
- whether the RWD is drawn from different kinds of devices or sources, which can make it difficult to seamlessly combine the data and can significantly change the quality of the data and the outcomes.
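As a minimal sketch of what some of these data quality checks might look like in practice, the Python snippet below computes per-subject completeness and per-field missingness for a small hypothetical RWD extract; the field names, values and the complete-case rule are illustrative assumptions only.

```python
import pandas as pd

# Hypothetical RWD extract; field names and values are illustrative only.
rwd = pd.DataFrame({
    "subject_id": [1, 2, 3, 4, 5],
    "age":        [67, 72, None, 58, 81],
    "sex":        ["F", "M", "F", None, "M"],
    "baseline_score": [42.0, None, 38.5, 55.0, None],
    "followup_score": [40.0, 47.0, None, None, 60.0],
})

fields = ["age", "sex", "baseline_score", "followup_score"]

# Percentage of fields completed per subject (range and median completeness).
per_subject = rwd[fields].notna().mean(axis=1) * 100
print(per_subject.describe()[["min", "50%", "max"]])

# Degree of missing data per field, and subjects that would be excluded
# under a complete-case analysis.
print((rwd[fields].isna().mean() * 100).round(1))
complete_cases = rwd[fields].notna().all(axis=1)
print(f"subjects excluded by complete-case analysis: {(~complete_cases).sum()} of {len(rwd)}")
```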
Regulatory settings where RWE can be used
RWE may be used in applications for premarket approval for a new device (applications for inclusion and conformity assessment) and as part of ongoing post-market surveillance of included devices.
Manufacturers are required to justify their usage of RWD and RWE, demonstrating that it is robust and fit-for-purpose.
The TGA, as the regulatory body, then assesses this justification.
The following is a list of some of the potential regulatory applications of RWE in the Australian context.
Pre-market
RWE may be used to support or supplement traditional clinical investigation data for the purposes of obtaining market approval.
For example, RWE may be used to show safety and performance in patients not captured by the traditional clinical trial (which usually evaluates carefully selected patients who may not be representative of how the device will be utilised in clinical practice).
This is the most common situation; however, in select cases, RWE may be used as the pivotal clinical evidence for a new device if the RWE is deemed to be sufficiently robust and the device is not novel.
RWE may also be used to vary the intended purpose for devices to include an expanded patient population (e.g., children), additional clinical indications, or additional delivery methods.
In this instance, RWE is most likely to be regarded as acceptable evidence in the setting of well-established technologies such as joint prostheses, transcatheter aortic valves or abdominal mesh.
RWE may be used to provide direct clinical evidence to support the safety and performance of an iterative improvement of an already included device family.
RWE may be used to obtain a control arm for comparison with a novel device being evaluated through a single-arm prospective study, for example, through an RWE study embedded in an established clinical registry with the registry providing independent oversight of outcome assessment and statistical analysis.
Lastly, post-market data (i.e., sales, complaints and adverse events) from overseas jurisdictions is routinely considered as part of the clinical evidence for a device.
The analysis of this information, including trends in adverse event rates over time and in comparison to other geographical jurisdictions, may provide additional evidence for the safety profile of a device.
Post-market
To comply with regulatory requirements, device manufacturers should be generating and reviewing RWE that continues to support an adequate level of safety and performance after the device is included in the ARTG.
This includes keeping records of complaints and adverse events, reporting adverse events to the TGA, investigating adverse events, and initiating action to mitigate risks to exposed patients (e.g., recall action) where indicated.
Hence the collection of RWE forms an integral component of post-market surveillance.
Consistent with evidence requirements across the lifecycle of the device, RWE should be used to continue to monitor emerging risks.
The manufacturer’s analysis of RWE is a critical step in maintaining the currency of quality management system (QMS) documentation.
This is especially relevant for keeping the clinical evaluation report and assessment of risk/benefit of the device, along with relevant QMS documentation such as risk assessments up to date.
RWE may be utilised to provide further information on long-term safety and performance or answer questions about safety and performance in particular patient cohorts, for example in the form of a PMCF or nested registry study.
This is particularly relevant for permanently implantable devices that may not have longer term follow-up data at the time of premarket approval.
These studies are frequently planned or in progress at the premarket assessment stage and the TGA may require the final results to be provided upon completion as a condition of inclusion.
Additionally, these studies may be imposed or required by the TGA at the time of premarket approval.
In relation to patient generated data or user feedback, rather than relying on incoming reports, manufacturers should in parallel implement practices to actively collect, seek and review patient generated data, user experience and feedback from a variety of sources.
The TGA routinely monitors sources of RWE for the emergence of important safety signals and may initiate an investigation to determine if there have been changes to the safety profile of a device (or group of devices) in the Australian population.
RWE in this scenario can potentially be utilised to support regulatory actions such as removal from the market if a device is found to have a higher rate of adverse health outcomes that shifts the overall benefit-risk profile.
Other regulatory actions include changes to device labelling or instructions and communications to end-users.
Finally, RWE may be provided to the TGA by sponsors to support an application for reclassification.
Software as a Medical Device (SaMD) and RWE
Using RWE to support SaMD applications
Real world evidence from software has the potential to augment clinical trials and complement traditional clinical evidence.
A key question is when to use synthetic data or real-world evidence, or when a hybrid approach is optimal.
The use of RWE is not a suitable substitute for testing the functionality and other aspects of the product (such as resilience, error handling, fault tolerance, or performance). Technical and analytical validation of software is still required to demonstrate safety and accuracy.
The following factors should be considered when using an RWE approach for validating a SaMD product:
- type of data and whether it is augmenting other data components
- source of data, including whether privacy and consent obligations have been met
- structured vs unstructured data
- gaps or mismatches in data
- curation or cleansing of data that may have changed the source RWE or introduced new errors
- fit of the data to the scope of the software – whether the RWE being generated covers all the use cases of the software product, and what sort of traceability analysis has been done to confirm this.
For AI products, the data must also be representative of the target population on which the product is to be deployed.
So, for example, if the RWE is generated on a different demographic or geographical region than the target population, justification must be provided, and a description of any actions taken to account for this difference.
Although validation evidence sourced from RWE may be available in high volumes, it may sometimes be incomplete or partial, biased, contain gaps or represent only the most exercised use cases or functions.
Sometimes, RWE generated on different models of the same kind of hardware or software may lead to different results from the SaMD.
This may make it difficult to demonstrate reproducibility of the results generated using real world evidence.
Confidence in the accuracy of the real-world evidence is paramount, and, in the case of Artificial Intelligence, this means transparency of data generation, sources, labelling and characteristics.
Real world data sourced from software
Data may be sourced from applications, sensors, IVDs, EHRs, general practice software, telehealth, or other sources such as consumer fitness devices or commercial data vendors.
Large data sets can be sourced in this way; however, they are often difficult to join accurately with data from other sources due to a lack of consistency in standardising data items.
Existing obligations under the medical device regulatory framework, including the Essential Principles, still apply for software-based medical devices for safety and performance, and in addition there are software specific requirements such as cyber security and version control – the software-based medical devices webpage has details about these obligations and links to guidance.
Specific real world data issues that need to be considered by sponsors and manufacturers in sourcing, storing, and using the data:
- Data governance framework and how the data is curated and managed on an ongoing basis
- Sourcing, collection and how the data is used
- Feeds into analytics
- Storage - on devices, in the cloud, which jurisdictions, management of copies – for example on intermediate servers
- Third party access - Application Programming Interfaces (APIs), on-selling of data by third parties
- Security
- End of life – disposal, retirement, porting
- Other legal obligations such as privacy and consent
Post-market use of real-world evidence for software
In a post-market context, real world evidence may be used to:
- Give further external validation of devices to support generalisability of model performance, i.e., ability to perform in a new use environment or a new sample of patients.
- Assist in expansion of indications or claims for a product that is already in the market, provided that the study design is appropriate to the claims.
- Track end points beyond model accuracy, for example mortality, rate of Intensive Care Unit admission or other patient outcomes.
Variability of data output from different devices, across different uses on the same or similar patients or conditions, and by the same or different operators, may make the data difficult to interpret or to check the accuracy.
This may impair the ability to use the data as evidence.
Key considerations and limitations for analysis
When utilising RWD in an interventional or observational study to answer a regulatory question there are several considerations.
The following questions can guide the use of RWE in regulatory decision making and highlight some of the potential concerns when used to demonstrate safety and performance of a medical device.
These include:
- The type of RWD available for analysis, whether RWE is an appropriate source of clinical evidence for the device and regulatory question, or whether a traditional clinical trial should be performed.
Given that some RWE is collected for non-regulatory purposes, its accuracy and reliability may be limited.
This is contrasted to a clinical trial, in which the operating environment is highly controlled and contains safeguards to optimise the quality and completeness of data.
As RWE is limited to the data that is available, there may be limitations on the evidence that can be generated to demonstrate safety and performance.
The usual direction in which clinical trials operate is to first formulate the research question, before gathering data to inform that research question.
This ensures correct and relevant information is collected. RWE studies generally operate in reverse, first considering the data that is already available (i.e., RWD) before determining a research question.
These studies may therefore be inherently confined to answering narrow research questions that are amenable to use of available data.
Data quality is also a concern as this may limit the analysis for the purposes of regulatory decision making.
Key considerations as to whether RWE is an appropriate source of clinical evidence are the level of risk posed by the device to the patient and the nature of the scientific question the study seeks to answer.
It may be easier to utilise RWE when there is a single clearly defined outcome being evaluated rather than for characterising the entire safety and performance profile of a relatively unknown technology.
If the RWD source is based on overseas systems (e.g., US Medicare/Medicaid or overseas registries), the sponsor will need to provide some background information, for example but not limited to population demographics, data accrual and quality control for that RWD source as the TGA may not be familiar with the inherent strengths and limitations of the dataset.
- Consider if the study design is sufficient to answer the regulatory question – i.e., whether it can demonstrate the device is safe and effective in the intended patient population and whether there is a comparator group that consists of the typical standard of care.
When analysing RWD, study design is a key consideration given some of the limitations with RWE, particularly the use of retrospective or previously collected data.
If the selected RWD cannot appropriately answer clinical questions or provide sufficient evidence for regulatory decision making, then a prospective clinical trial should be seriously considered.
As the study methodologies employed to analyse RWD are based on essentially the same considerations as for traditional clinical research, the standard evidence hierarchy indicates the appropriateness of different study designs to answer questions.
For treatment decisions, the RCT provides the highest quality evidence, followed by prospective and then retrospective study designs.
To allow for a meaningful comparison, a control group is required to demonstrate superior (or at the least, comparable) performance and safety compared to the current standard of care for patients with a particular condition.
This could be a concurrent control group or an external/historical control group depending on the device.
An example is a nested clinical trial, where the control group data could be RWE collected through the registry.
- Consider the rationale for the selection of a particular outcome and its clinical relevance. The selected outcome should provide sufficient assurance of device safety and performance in a regulatory setting.
Selected performance and safety outcomes should be clinically relevant and reflect meaningful improvements in mortality, morbidity or quality of life including functional and symptomatic improvement following treatment with a particular device.
While surrogate markers (for instance, radio stereometric analysis as a surrogate marker for joint prosthesis loosening, or pacing capture threshold or sensing amplitude for pacemakers) may be used in limited circumstances, their use must be justified and validated against meaningful clinical outcomes to be useful in regulatory decision making.
If such outcomes are not available, a specifically targeted prospective trial may need to be considered to accurately capture this information.
Sponsors should consider whether the selected performance and safety outcomes will be adequately captured by the RWD source.
For example, if conducting an RWE observational study using electronic health records to identify patients implanted with an orthopaedic device and evaluate the rate of revision surgery due to device fracture, the sponsor should consider whether all hospitals in the country are covered by the RWD source.
A patient may have the initial device implantation procedure at one hospital but present for a revision procedure at a different hospital not covered by the RWD source.
This would result in underestimation of the rate of device revision.
Similarly, if conducting an RWE study evaluating the mid-long term safety profile of a supportive mesh device utilised in breast reconstructive surgery, many of the adverse events associated with these devices (e.g., infection, swelling, seroma, pain) would not result in emergency presentation but instead be evaluated by general practitioners or specialists in the outpatient setting.
If the RWD source is electronic health records obtained from hospitals only, this will miss a significant proportion of adverse events diagnosed and treated in the community.
- Consider if the collection of data was prospective (primary data collection) or retrospective (secondary use of data). The study protocol and statistical analysis should be pre specified.
Prospective data collection is preferred as it represents primary data collection for the purpose of research and is more likely to be planned, structured, adherent to a defined follow-up schedule, complete and relevant to the research question. It is hence regarded as of superior quality to retrospective data, which relies upon existing data collected for different purposes and may be inconsistent and incomplete.
If a retrospective analysis is being completed on data that has already been collected, clinically important outcomes may not be available.
When analysing RWE, prespecified study protocols and statistical analyses are essential to clearly state clinical outcomes and to ensure meaningful results are reported.
Additionally, the risk of making multiple comparisons (e.g. new endpoints, new analyses, subgroup analyses, covariate adjustments) and selectively publishing significant results is increased in retrospective data studies, where the research question can be more easily changed, or a secondary question elevated to the primary one.[14]
Therefore, companies are strongly encouraged to develop and publish a study protocol and statistical analysis plan before accessing, retrieving and analysing RWD as this creates added reassurance of study validity, especially in the setting of retrospective data collection.
Any post-hoc analyses should be recognised as exploratory and “hypothesis generating” only and cannot be used to substantiate regulatory decision making.
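To illustrate why unplanned multiple comparisons are a concern, the short calculation below shows how the probability of at least one spurious "significant" finding grows with the number of independent comparisons tested at a nominal 0.05 level, and how a simple Bonferroni correction (one of several possible adjustments) restores the family-wise error rate; the numbers of comparisons are arbitrary examples.

```python
# Family-wise error rate for k independent comparisons at a nominal alpha of 0.05.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer_uncorrected = 1 - (1 - alpha) ** k          # no adjustment
    fwer_bonferroni = 1 - (1 - alpha / k) ** k       # each test run at alpha / k
    print(f"{k:>2} comparisons: "
          f"P(>=1 false positive) = {fwer_uncorrected:.2f} uncorrected, "
          f"{fwer_bonferroni:.3f} with Bonferroni correction")
```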
If RWE is to be utilised in an application, a pre-submission meeting to discuss the study design and analysis plan will help ensure that appropriate protocols are developed and followed.
A clear direction of research prior to the commencement of the study also aligns with the principles of good clinical practice.
- Consider if the RWD has been obtained in line with Good Clinical Practice guidelines. Outline the research or clinical governance oversight that was present throughout the collection and analysis of RWD.
All research that is conducted and utilises real world evidence should be completed in accordance with good clinical practice guidelines, ISO 14155:2020 Clinical investigation of medical devices for human subjects and the Declaration of Helsinki.
A short statement as part of the application detailing how the collection of RWE complies with these principles will be sufficient.
As stated in the TGA Clinical Evidence Guidelines, “If clinical investigation data is collected outside Australia, the investigation must have been conducted in accordance with the principles of the Declaration of Helsinki, as in force at the time and place where the investigation was conducted.”
The TGA recommends, in keeping with ISO 14155:2020 and GCP, that manufacturers and sponsors remain up to date with current best practices and literature regarding elements of data assessment.
- Address bias, missing data and confounding as a limitation.
All RWD and RWE will have inherent bias that is applicable to the studied population. Of concern is selection bias, which refers to systematic differences between baseline characteristics of the groups that are compared.[15]
Randomisation, when executed successfully, prevents selection bias in allocating interventions to participants.
In an RWE study randomisation may not be feasible and consideration should be given to ensuring the treatment and control groups are as balanced as possible, utilising statistical techniques where necessary.
Selection bias may occur if the RWD source only captures a small portion of patients exposed to the device (e.g., a voluntary device registry in contrast to opt-out registry).
This patient cohort may not be representative of the types of patients expected to receive the device in clinical practice based on the manufacturer’s intended purpose.
This could bias the results of the study yielding a more favourable result for the intervention if the patients included in the study were more likely to achieve positive outcomes due to their underlying characteristics.
These factors may also negatively affect the generalisability of the study.
For example, a company is utilising overseas joint registry data to demonstrate the safety and performance of a new hip joint replacement.
The overseas registry has around 50% national coverage in terms of the proportion of patients undergoing primary joint replacements and revisions, with the remainder of the patient outcomes unknown.
Although the available outcomes for the included patients are favourable, demonstrating revision rates comparable to the industry standard for hips, the results are not completely reliable because of the low coverage of the registry and the concern that missing data from the remaining patients may reflect different, less favourable outcomes. Hence this data source would not be accepted as the basis for a premarket approval.
However, this data could be useful for informing certain trends and should be included within an application for completeness.
Detection bias
Detection bias refers to systematic differences between groups in how outcomes are determined.[16]
Blinding of patients, clinicians and outcome assessors is one way to prevent the introduction of bias into the measurement of outcomes, as it will reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects outcome measurement.
This is particularly important for assessment of subjective outcomes such as pain scores.[17]
Blinding of the patient is often not possible in medical device trials unless a sham device/procedure is utilised.
However, the outcome assessor can be blinded.
Blinding will, in most circumstances, not be possible in an RWE study and as a result all relevant parties will have an awareness of the treatment assigned.
The impact of lack of blinding on the results should be considered in terms of the direction and magnitude of potential bias – this will depend to some extent on whether the performance and safety outcomes are objectively determined.
Lead time bias
Lead time bias is a distortion that overestimates the apparent time surviving with a disease because the time of diagnosis is brought forward.
This is often seen in screening programs where an earlier detection and treatment of disease can lead to a greater chance of cure or longer survival.
However, as screening detects disease earlier, before signs and symptoms appear, estimates of differences in survival time will be biased: survival time will appear longer in screened patients even if early detection has no effect on the disease course.
This can be a concern in some RWE as ‘time zero’ may be unknown, thereby distorting the survival time analysis and favouring the intervention.
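As a minimal numerical sketch of this effect, the example below assumes (with invented figures) that earlier detection does not change the disease course: moving 'time zero' earlier adds the lead time to the measured survival even though the date of death is unchanged.

```python
# Illustrative lead time bias calculation (invented figures).
# Suppose the true time from disease onset to death is 10 years for everyone.
onset_to_death_years = 10.0

# Unscreened patients are diagnosed 7 years after onset (when symptoms appear);
# screened patients are diagnosed 4 years after onset.
diagnosis_after_onset = {"unscreened": 7.0, "screened": 4.0}

for group, dx_time in diagnosis_after_onset.items():
    measured_survival = onset_to_death_years - dx_time  # time from diagnosis to death
    print(f"{group}: measured survival after diagnosis = {measured_survival:.0f} years")

# The screened group appears to survive 3 years longer (6 vs 3 years) purely
# because 'time zero' was moved earlier, even though no patient lived longer.
```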
Further, the RWD source may contain missing or incomplete data, particularly if it is not mandatory to collect certain patient outcomes, or if there is variability in the conduct and timing of clinical assessment.
Missing data can restrict the extent of data analysis that is possible and decrease the statistical validity of key study results.
The RWD source may not contain sufficient information to capture important safety and performance endpoints and may not be able to accurately identify the exact device under assessment (e.g., make, model, UDI, etc.) to enable reliable conclusions to be drawn about a particular device’s performance and safety.
This missing data may also not be missing at random, which will limit the interpretation of results, particularly if a subgroup is disproportionately affected.
For example, there may be missing information on patient demographic characteristics (e.g., age, gender, race, ethnicity) limiting the applicability of the results to these subgroups of the intended patient population.
This has important implications for health equity as the performance and safety profile of medical devices can vary depending on the population subgroup,[18] leading to different (potentially poorer) health outcomes for people with the same health condition.
A recent example is the observation that pulse oximeters overestimate oxygen saturation in individuals with darker skin pigmentation, meaning low oxygen levels may be misrepresented, potentially changing treatment decisions and resulting in adverse outcomes.[19]
Adequate representation of population subgroups is required in RWE research so that the outcomes of usage of a device, whether favourable or unfavourable, are properly understood and ongoing use of the device does not contribute to health inequities.
Sponsors should consider the epidemiology of the health condition/s treated by their medical device in the general population, including the proportions of types of patients affected by the condition and ensure the RWE covers these groups of patients adequately.
For example, if 50% of the population affected by aortic stenosis in Australia is comprised of females, then the RWE should ensure that a specific statistically powered analysis is conducted to ascertain the performance and safety profile of the device (e.g., a transcatheter aortic valve) in females.
Sponsors should document missing data and the approaches used to address it in their submission to the TGA.
It may be necessary to utilise a larger dataset and/or conduct sensitivity analyses to minimise the impact of missing data.
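The snippet below sketches one very simple form of sensitivity analysis for missing outcome data: bounding a revision rate by assuming that all patients with missing follow-up were, alternately, event-free or revised. The counts are hypothetical, and real analyses would typically use more sophisticated approaches (e.g., multiple imputation).

```python
# Hypothetical counts for a device cohort with incomplete follow-up.
n_total = 1000          # patients implanted with the device
n_observed = 800        # patients with known revision status
n_revisions = 24        # revisions observed among those with known status
n_missing = n_total - n_observed

observed_rate = n_revisions / n_observed
best_case = n_revisions / n_total                  # all missing assumed revision-free
worst_case = (n_revisions + n_missing) / n_total   # all missing assumed revised

print(f"observed revision rate (complete cases): {observed_rate:.1%}")
print(f"sensitivity bounds across missing data:  {best_case:.1%} to {worst_case:.1%}")
# Wide bounds (here 2.4% to 22.4%) signal that conclusions are fragile
# to the missing 20% of outcomes.
```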
Confounding
Confounding is defined as a situation in which a non-causal association between a given exposure or treatment and an outcome is observed because of the influence of a third variable designated as a confounder.
The confounding variable needs to be related to both the treatment and the outcome under study.
Confounding is distinct from bias because this association, while not causal, is real.[20]
RWD sources may not contain sufficient information to identify potential confounders such as patient comorbidities, smoking status or baseline demographics, or operator experience and training that can have an impact on device performance and safety.
The impact of these factors may lead to an over- or under-estimation of the treatment effect.
While statistical techniques including restriction, matching or randomisation are available to control confounding, their use may be limited.
Successful randomisation ensures that the treatment and control groups are balanced in terms of known and unknown confounders.
In the absence of randomisation in RWE studies, the propensity score and associated techniques such as propensity score matching are sometimes used as a means for adjusting for potential confounders.
The problem with these techniques is that they can only adjust for known confounders and may miss other potential confounders.
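As a minimal sketch of the idea (assuming Python with NumPy and scikit-learn; the covariates, treatment assignment and data are entirely synthetic), the code below estimates propensity scores with logistic regression and performs 1:1 nearest-neighbour matching without replacement. It can only balance the measured covariates it is given, which is precisely the limitation described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic measured confounders (age, comorbidity score) and non-random treatment.
n = 500
age = rng.normal(70, 10, n)
comorbidity = rng.normal(2, 1, n)
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 70) + 0.5 * comorbidity - 1)))
treated = (rng.random(n) < p_treat).astype(int)

X = np.column_stack([age, comorbidity])

# Step 1: estimate the propensity score P(treatment | measured covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated_idx = np.flatnonzero(treated == 1)
control_idx = np.flatnonzero(treated == 0)
available = set(control_idx.tolist())
pairs = []
for t in treated_idx:
    if not available:
        break
    remaining = np.array(sorted(available))
    match = remaining[np.argmin(np.abs(ps[remaining] - ps[t]))]
    pairs.append((t, match))
    available.remove(match)

print(f"matched pairs: {len(pairs)}")
# Outcomes would then be compared within the matched sample; unmeasured
# confounders remain unadjusted, as noted above.
```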
RWE studies conducted by manufacturers or sponsors, or those who have received funding or support from manufacturers or sponsors, will be considered on their merits.
Peer reviewed articles should clearly identify any conflicts of interest (actual or perceived).
It is accepted that certain studies require support from manufacturers (such as large-scale pre-market approval studies) or will be conducted by manufacturers (such as PMCF studies).
A discussion of the extent of involvement of manufacturers or sponsors should form part of the study report and the critical analysis contained in the regulatory submission.[21]
TGA approach to assessment of RWE
A clear rationale for presenting RWE in a regulatory submission is required, including a justification for the applicability of RWE to the regulatory question and an explanation of how potential limitations have been minimised.
The TGA will assess RWD/RWE with respect to its fitness for purpose – this will include an assessment of the quality (relevance and reliability) of the data and the appropriateness of the proposed analytical techniques.
The TGA will assess the sponsor’s process for analysing the RWD, including the study design and statistical analysis plan for its ability to answer the specific regulatory question.
RWE is assessed as for traditional clinical trials, with consideration of key criteria including:
- the representativeness of the patient population compared to the intended patient population
- potential sources of bias and confounding, and control of these factors
- whether the analysis was pre-specified
- selection of outcome measures (whether they are surrogate measures and/or clinically correlated measures)
- data quality (whether collection was prospective or retrospective).
Novel devices are less likely to be able to rely solely on RWE than well-established technologies with known safety profiles.
Overall, the TGA’s expectation that the level of clinical evidence substantiating a device application should be proportionate to the use and classification of the device remains unchanged.
With respect to the information to be provided to the TGA in a submission including RWE, the Clinical Evidence Guidelines chapter on "The Clinical Evaluation Report (CER)” includes sections on “Summary of the clinical data and appraisal” and “Data analysis” which cover clinical experience data among other sources.
Additionally, the Clinical Evidence Guidelines include a “Clinical evidence checklist” outlining the types of information to be provided in the submission.
The list below details some examples of situations in which RWE may be used:
Premarket
Example 1
Company A makes Device B, a transcatheter aortic valve that is included on the ARTG and approved for use in patients with severe aortic stenosis who are judged to be suitable for transcatheter aortic valve implantation (TAVI) by a cardiologist.
The approved method of valve delivery is through the femoral artery (transfemoral approach); however, Company A would like to obtain approval for valve delivery via the subclavian artery to cater for patients who have peripheral vascular disease. Instead of completing a randomised controlled trial to assess the safety and performance of the subclavian approach, the sponsor utilises data extracted from a nation-wide compulsory registry that tracks the outcomes for all patients implanted with Device B, using all approaches.
This data demonstrated an adequate safety and performance profile and was successfully utilised to support expanding the indication to include valve delivery via the subclavian approach.
Example 2
Company X makes Device Z, a mechanical circulatory support device that is slightly different in design to Device Y, a mechanical circulatory support device already included on the ARTG and made by Company X.
Company X demonstrates to the TGA in the submission that Device Z and Device Y are substantially equivalent. Indirect clinical evidence is provided for Device Y in the form of a randomised controlled trial, however as no clinical trial has been performed for Device Z, Company X instead provides 1-year data from a statistically significant number of patients treated in a different country and captured through a nationwide mechanical circulatory support device registry. The data captures relevant performance and safety endpoints for the purposes of demonstrating compliance with the Essential Principles and overall similar outcomes to Device Y.
In this case, RWE in the form of registry data is used to provide direct clinical evidence for a new iteration of an already included device.
Example 3
Company H makes Device J, a new hip joint replacement with changes considered by the TGA to be small, compared to a previous iteration from the same company that is already included on the ARTG.
This hip joint replacement has been used in another country with a well-established joint replacement registry and has accumulated data on procedures and revisions, indicating a revision rate comparable to other total conventional hip prostheses in that registry.
The company also provides the TGA with data from a second registry in a different country, providing further support for a revision rate in line with the national average.
In this scenario, the quality of RWE from the joint registry data was able to form pivotal clinical evidence that gave the TGA assurance to support a premarket application.
This is accepted if the medical device being considered is not a novel device and has an established history of use and a well-understood safety profile.
Post-market
Example 1
Device M is a supportive mesh device utilised in breast reconstructive surgery; however, it is subject to reclassification and the sponsor must supply the TGA with additional evidence for safety and performance commensurate with a higher risk classification.
An Australian registry has been capturing patients implanted with Device M for 5 years and recording key safety and performance outcomes such as complications and revisions. The registry contains in-built comparator groups consisting of patients implanted with similar devices to Device M and without a mesh device.
The sponsor contacts the Australian registry to request access to the data on Device M and utilises this RWE as part of its application to up-classify the device and remain on the ARTG.
Example 2
The TGA monitors the Australian Orthopaedic Association National Joint Replacement Registry (AOANJRR) annual report to identify joint replacements being utilised in Australian patients that exhibit higher than anticipated revision rates.
Revision rates are the key performance and safety indicator for joint replacements as they represent an unambiguous objective endpoint. Investigations are initiated in conjunction with device companies to determine whether regulatory action is required to mitigate risks to public health and safety.
In this scenario, RWE is used in public health surveillance to identify potential safety signals for further investigation.
Example 3
The TGA is conducting a post-market review into a class of implantable medical device for which a safety signal was identified in a peer-reviewed publication.
The review will assess the clinical evidence for each device in the ARTG to verify ongoing compliance with the Essential Principles. Company J makes Device C and has maintained a post-market prospective voluntary registry gathering key performance and safety outcomes on a statistically significant number of patients implanted with the device in Australia since market approval and following them up for 5 years post-operatively.
In this example, the RWE study provided by Company J was assessed by the TGA to be of high quality, and the rates of adverse events for Device C remained well within the accepted industry standard for this kind of device. In this case, RWE was accepted by the TGA as evidence of compliance with the Essential Principles.
Examples of RWE that would not be accepted
Novel device
A first-in-class implantable device is approved in an overseas country and approval is being sought in Australia utilising a retrospective case series (RWE study) that identified patients treated with the device through analysis of medical records at a single hospital and examined the notes to identify performance and safety outcomes.
The key performance outcomes are subjective and the nature of the condition the implantable device is treating is that patients are likely to be taking effective co-treatments alongside the new device.
There are many issues with this approach, for instance the lack of a comparator arm prevents direct assessment of safety and performance against the standard of care and introduces bias and confounding in the assessment of outcomes.
The inherent disadvantages of retrospective data collection mean it is not sufficiently reliable to support an approval.
The RWE study alone will not provide satisfactory assurance of safety and performance for the purposes of premarket approval.
A prospective clinical trial with an appropriate comparator arm is required to minimise the risk of bias and confounding in assessment of safety and performance outcomes.
Non-representative patient population
An observational RWE study is conducted utilising a RWD source (government funded health insurance scheme data) that includes only patients over the age of 65 who are eligible for insurance coverage. However, the device under evaluation is also intended to be utilised in patients under the age of 65. While the RWE study successfully demonstrates safety and performance in the chosen population, the outcomes cannot be extrapolated to the broader intended patient population, and regulatory approval cannot be granted unless additional data on the excluded patient group is provided.
Use of surrogate markers
An observational RWE study is conducted to assess the performance and safety of a new pacemaker/ICD device utilising an overseas company-owned database of patients that remotely monitors and records the activity and treatment delivered.
No clinical outcomes are collected (e.g., mortality, hospital presentations, etc.) but the study collects data on the pacing threshold/capture success of the device as a surrogate outcome and presents this as evidence of compliance with the Essential Principles.
While the data is helpful to show one technical aspect of the performance of the device in pacing appropriately, this is inadequate in demonstrating patient benefits without clinical correlation.
In this scenario, because the surrogate markers do not adequately represent clinical outcomes directly meaningful to the patient such as survival, function, or symptomatic improvement that can be used to judge the safety and performance of the device, RWE will not be sufficient.
[1] U.S. Food & Drug Administration. Real-world evidence. U.S. Department of Health and Human Services. Accessed 26/3/2024 at https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence
[2] Ibid
[3] Therapeutic Goods Administration. Real world evidence and patient reported outcomes in the regulatory context. Australian Government, Department of Health and Aged Care, Therapeutic Goods Administration. Accessed 26/3/2024 at https://www.tga.gov.au/sites/default/files/real-world-evidence-and-patient-reported-outcomes-in-the-regulatory-context.pdf
[4] National Institute for Health and Care Excellence. NICE real-world evidence framework. National Institute for Health and Care Excellence. Accessed 26/3/2024 at https://www.nice.org.uk/corporate/ecd9/chapter/introduction-to-real-world-evidence-in-nice-decision-making
[5] Pratt N*, Vajdic CM*, Camacho X, Donnolley N, Pearson S. Optimising the availability and use of real world data and real world evidence to support health technology assessment in Australia. Sydney, Australia: UNSW Sydney, 2023
[6] Dang A. Real-World Evidence: A Primer. Pharmaceut Med. 2023 Jan;37(1):25-36. doi: 10.1007/s40290-022-00456-6.
[7] Medical Services Advisory Committee. What is the MBS and Medicare? Australian Government Department of Health and Aged Care. Accessed 26/3/2024 at http://www.msac.gov.au/internet/msac/publishing.nsf/Content/Factsheet-03
[8] Liu F, Panagiotakos D. Real-world data: a brief review of the methods, applications, challenges and opportunities. BMC Med Res Methodol. 2022 Nov 5;22(1):287. doi: 10.1186/s12874-022-01768-6.
[9] U.S. Food & Drug Administration. Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices. U.S. Department of Health and Human Services. Updated 2023. Accessed 26/3/2024 at https://www.fda.gov/media/174819/download
[10] Ibid
[11] HealthIT.gov. Patient-Generated Health Data. The Office of the National Coordinator for Health Information Technology (ONC). Accessed 26/3/2024 at https://www.healthit.gov/topic/scientific-initiatives/pcor/patient-generated-health-data-pghd
[12] Liu F, Panagiotakos D. Real-world data: a brief review of the methods, applications, challenges and opportunities. BMC Med Res Methodol. 2022 Nov 5;22(1):287. doi: 10.1186/s12874-022-01768-6.
[13] U.S. Food & Drug Administration. Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices. U.S. Department of Health and Human Services. Updated 2017, accessed 26/3/2024 at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/use-real-world-evidence-support-regulatory-decision-making-medical-devices
[14] Vollert J, Kleykamp BA, Farrar JT, Gilron I, Hohenschurz-Schmidt D, Kerns RD, Mackey S, Markman JD, McDermott MP, Rice ASC, Turk DC, Wasan AD, Dworkin RH. Real-world data and evidence in pain research: a qualitative systematic review of methods in current practice. Pain Rep. 2023 Feb 1;8(2):e1057. doi: 10.1097/PR9.0000000000001057.
[15] Cochrane Training. Chapter 8: Assessing risk of bias in a randomized trial. The Cochrane Collaboration. Accessed 26/4/2024 at https://training.cochrane.org/handbook/current/chapter-08
[18] Kadakia KT, Rathi VK, Ramachandran R, Johnston JL, Ross JS, Dhruva SS. Challenges and solutions to advancing health equity with medical devices. Nat Biotechnol. 2023 May;41(5):607-609. doi: 10.1038/s41587-023-01746-3
[19] Therapeutic Goods Administration. Limitations of pulse oximeters and the effect of skin pigmentation. Australian Government, Department of Health and Aged Care, Therapeutic Goods Administration. Accessed 26/3/2024 at https://www.tga.gov.au/news/safety-updates/limitations-pulse-oximeters-and-effect-skin-pigmentation
[20] U.S. Food & Drug Administration. Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices. U.S. Department of Health and Human Services. Updated 2017, accessed 26/3/2024 at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/use-real-world-evidence-support-regulatory-decision-making-medical-devices
[21] Therapeutic Goods Administration. Clinical Evidence Guidelines. Australian Government, Department of Health and Aged Care, Therapeutic Goods Administration. Accessed 26/3/2024 at https://www.tga.gov.au/sites/default/files/clinical-evidence-guidelines-medical-devices.pdf