Hello everyone. Thank you for your patience, and welcome to the webinar titled How St.
Jude Medical Manages Oracle Clinical Studies Using Accel-Copy, presented by Michelle Engler,
who is the director of application development for BioPharm, and Ilya Gubernik, who is a clinical
software engineer at St. Jude Medical.
I'm Eugene Sefanov, the marketing manager at BioPharm and I'll be going over some housekeeping
items before turning it over to Michelle and Ilya.
During the presentation, all participants will be in listen only mode. However, you
may submit questions to the speakers at any time today by typing them in the chat feature
located on the side of your screen. Please state your questions clearly. Keep in mind
other webinar participants will not see your questions or comments. Nonetheless, your questions
to the speaker will be addressed as time allows towards the end of the presentation.
If you still have unanswered questions after the webinar or would like to request additional
information from BioPharm, please feel free to visit the company's website for contact
information. As a reminder, today's presentation is being recorded and will be posted on BioPharm's
website within 24 hours. We will also be emailing you a link to the recording as well as a link
to the PDF version of the presentation.
This concludes our housekeeping items. I would now like to turn the call over to Michelle
Engler and Ilya Gubernik.
Thank you, Eugene. Welcome to the webinar on How St. Jude Medical Manages Oracle Clinical
Studies Using Accel-Copy. I'm Michelle Engler here at BioPharm Systems, and I'll be presenting
this webinar along with my colleague Ilya Gubernik from St. Jude Medical.
Okay, let's get started. This webinar will give an overview of Accel-Copy, discuss
Accel-Copy as a business solution, and touch on using Accel-Copy in a disconnected environment.
Ilya will then talk about the business process change that was introduced with Accel-Copy
at St. Jude. He'll also go through some business cases of how they're using the product in
their day-to-day work. And finally, he'll show a demonstration of the product in their work environment.
Okay. So, Accel-Copy. Basically, what it is, is an application that can be utilized to
migrate database objects, such as study objects, the global library, parts of your study,
or metadata, between instances of Oracle Clinical and Remote Data Capture.
Typically, what happens is, studies are extracted from production and then imported into secondary
databases. It doesn't have to be configured that way; it can be configured a variety of
ways. But the typical way is to extract from production and then migrate studies
to your test or development environment for troubleshooting, QA, or
training purposes.
Accel-Copy allows certain objects to be selected for copy, so you could say: I want to copy a
full global library or a certain domain; I want to copy just the study design definition;
or possibly just the patient data. So, it allows some delineation of which types of objects
are being copied between the instances.
It has additional features such as a full data compare. It also handles study delete,
which is useful in the case where you are archiving studies out of, say, a production environment
into an archival instance. It also provides for disconnected study copy, which I'll talk
about in the next couple of slides.
So, how can Accel-Copy be used in your business? Database management is one of the classic
uses of Accel-Copy. The way to create development and test environments without
Accel-Copy is to clone your databases. With Accel-Copy, you can avoid having
to do a full clone, a full copy of your database, and instead select certain items to be migrated.
And this is helpful when you have multiple groups using a particular development environment,
or when you don't have a dedicated DBA resource, these kinds of things. So, for database
management, we can use it to migrate just select domains and studies. We can also
use it to archive. For support and training, we can take a study from production,
for example, migrate it into a training database, and then perform protocol training
on the study in a nonproduction environment.
You can also create a play area for users to go in and try things out, or test study
amendments and things like this. You can use Accel-Copy to push updates to your
secondary environment. It can also be used for synchronizing global data such as reference
codelists, and your sites and investigators, if you wanted to synchronize those across database
instances.
And finally, it can be used as a QA tool. In the case of a database upgrade, say you're
upgrading your Oracle Clinical, as part of your QA checks you could extract
certain studies prior to the upgrade, hold onto those extractions while performing the
upgrade, and then do a full data comparison between the old extractions and the newly upgraded
data to verify there's no data loss. So it can also be used in that way as a QA tool.
Another option is to use Accel-Copy in a disconnected environment. The typical scenario for
this is that maybe you have a company with lots of studies in a database, and
another company is going to break off from that primary company and
wants to take its study data with it. But because they're sharing an instance,
it becomes kind of a complicated issue: you can't take the primary company's studies with
you; you only want your particular data pieces.
With Accel-Copy, it can be utilized in a way that you don't actually have to install
anything on the source or the target environments. From the command line, you can just
extract the studies and the global library of interest and then import them into a target
instance without ever having to install anything on either instance.
In addition, that command line interface is available to any user of Accel-Copy. Even if
Accel-Copy has been installed with the full GUI, you still have that command line
interface available. And through that command line interface, there are some additional
features, such as refreshing a specific table or continuing a load that
fails for some reason. So, these different ways to access Accel-Copy and to perform
operations such as migrating data between instances are all available through various mechanisms,
depending on how you want to install the application.
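The disconnected extract-then-import flow above might be driven by a script like the following. The webinar never shows Accel-Copy's actual command-line syntax, so the executable name and every flag below are invented purely for illustration.

```python
# Hypothetical sketch of scripting a disconnected study copy. The
# "accel-copy" executable name and all flags are assumptions for
# illustration only; the real CLI syntax is not shown in the webinar.

def build_extract_cmd(source_db, study, extract_name):
    """Compose a command line that pulls one study plus the global
    library out of the source instance into a local extract folder."""
    return ["accel-copy", "extract",
            "--source", source_db,
            "--study", study,
            "--name", extract_name]

def build_import_cmd(target_db, extract_name):
    """Compose the matching command line that loads the extract into
    the target instance; nothing is installed on either database."""
    return ["accel-copy", "import",
            "--target", target_db,
            "--name", extract_name]

print(build_extract_cmd("prod", "STUDY01", "study01_extract"))
print(build_import_cmd("dev", "study01_extract"))
```

In practice these lists would be handed to something like `subprocess.run` on the machine that has network access to both databases.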
So, now what I'll do is pass it on to Ilya, who'll talk about St. Jude Medical and how
Accel-Copy has impacted their business process.
Thank you, Michelle. So, before Accel-Copy, we had to do a big database refresh.
It affected multiple teams, it was very difficult to schedule, and we had to use
DBA resources to be able to do it. And there was significant downtime and inconvenience
for our teams.
After Accel-Copy, we only refresh the specific studies that we need. The downtime
is much less and it's much easier to schedule. The disruption to other teams' work is very
minimal. And also, we can do it ourselves; we don't need a DBA to do it.
So, one business case for how we use Accel-Copy is when we have to test newly created studies.
First, we create a provisional study in Production, and then we extract
that specific study and the global library from Production using Accel-Copy.
Then we delete the study and the global library from the TEST instance using Accel-Copy.
Then we import the study and global library into TEST using Accel-Copy.
Then we activate the study definition and generate validation/derivation procedures
in TEST.
Finally, the project teams perform the study setup testing in TEST. Then we fix any
issues found in Production and/or TEST, and we refresh it one more time from
Production to TEST after the issues have been fixed, using Accel-Copy. Then we activate the
study definition in Production and release the study to internal and external users in
Production.
The next business case is when we need to test a change to the study and we don't want
to do it in Production. What we do is delete the study in TEST using Accel-Copy. We extract
the study from Production using Accel-Copy, and when we do this extraction, there's no
downtime in Production.
Then we import the study into the TEST environment using Accel-Copy, and we generate the
validation/derivation procedures in TEST. We make the protocol study changes in TEST,
we verify those changes in TEST, and then we make the same changes in Production.
Now let's look at how we use Accel-Copy for the training database. What we do is extract a certain
study and the global library from Production using Accel-Copy. Then we delete the study and global
library in the Training instance using Accel-Copy.
Then we import the study and global library from Production into the Training environment using
Accel-Copy. We activate the study in Training. We grant access to the study in Training for
trainees. And finally, we conduct the training for the study, for the site, and for the field
engineers. We allow users continued access to the study in Training even after the study
has been released.
So, now let me do a little demonstration. So, I'm going to share my screen. Can you
guys see my screen?
Yeah, we can see it Ilya.
So, when I enter Accel-Copy, this is the main interface that I see. The first thing
I'm going to do is extract source data. So I select the source
parameters, which is Production. Then I select the extract type, and I'm going to do full
study and global library. It asks me for an extraction name, so I'm just going to give it
a name. Then it asks for my email; this is where the notifications are going to go.
And then I need to select the study number or the study name that I want to extract.
This is the list of studies that are available. And then I'm going to hit extract.
When I do this, a job is created. So then I go to the job queue section of the
interface. Here, I can search for different kinds of jobs: I can search by job
type, job status, or by who submitted the job. I'm going to find all the jobs
submitted today.
Here is the job for the extract I submitted, and I'm doing this from the
Production environment. And here, if I open this in a drop-down, it will give me a look
at what's going on. It will give me a log of what's going on: the study being
copied, the environment, and the list of tables being copied.
So, we can look at one of the tables, and I have records being extracted from this
table. And then I look at another table, and it tells me how many records. It tells me
for this specific table; it doesn't tell me for everything.
You can see it is still going through a table; it copied six records. So, while I wait
for that, I am going to show you what is going on in the background. In the background,
in the extract folder, the software creates a bunch of files, one file for each table
in Oracle Clinical. And if I open one of these files, it's going
to show me all the records that have been extracted.
So, this is an example: the table that opened was clinical studies, and these are the
records being extracted right now. There is only one record for this study, just as
you would see if you looked at the table in Oracle Clinical.
So, let me see. Okay, the job finished. Whenever the job finishes,
I get an email, and the email gives the extract name, the extracted study, and a list
of all the tables, like I showed you before, that have been extracted. This is the list
of tables. And I make sure there are no errors and that everything is done.
Okay, so the next step is to import to the target. What I'm
going to do is select the source, which is Production, and from the list of all
extracts available, select the one I want, which is this one. Then it gives me the target
environment.
Now, here I only want to import into the test or development environment; I don't want
to import into Production. There's a setting in the application where I can restrict
where imports can go, so we're restricting it so that Production can't be a target.
I am going to go with Development.
There's also a test-only option: it's just going to test the import, but it's
not going to load any data. I'm going to uncheck that. And then, do I want to refresh
the target? I am going to uncheck that too, because I already refreshed the target for
this study, so I don't need to do it right now. Then it asks me for my email, where
the job notifications are going to go. I am just going to double-check one more time:
source Production, the extract name, and the target environment, Development. Then I hit import.
Now, it's going to give me the same thing saying that the job has been submitted. So
then I can go to the job queue and see what's going on with my job. I hit today,
and then the window shows my job: it says import job, gives me the job
number, and then it gives me some information for the source environment and target environment.
So the source is Production and the target is Development. Then it shows test only and refresh: No.
One thing I forgot to mention: once the extract was done, it gave me a table
count. I extracted 258 tables and 19,438 rows.
So, what it is doing now is, again, giving me this log information as it goes from the
source database to the target database. The first thing it does is disable the triggers
on the target database, because we don't want to insert into the journal tables, so those
triggers get disabled. Also, it is good practice not to have users in Oracle Clinical or RDC
on the target database, because if they are on the system, it might affect us and it might
affect them. So, we usually just don't allow people on the system, and we stop the middle-tier
services.
Here comes the refresh. The import just copies each individual table, and it tells me
whether it was successful or failed, and whether there's a conflict or not. So, if I pull up
this table, OCL Study Site Roles, it will show if there's a conflict; there's no conflict.
The time it takes for this to finish depends on whether the study is active or not and how much
data is in the study. If the study has a lot of data, it might take a little bit
longer to do. But if there's no data, it takes less time.
So, here on the left-hand side, this is the middle-tier server, where Accel-Copy is installed.
If I go to the Accel-Copy folder, then to the imported folder,
and then to OC-DEV, this is where the import is happening. This is the folder for the
import that is going on right now.
So, it gives me a list of tables again. Here's one finished table, so let's open this table
for a look. This is the clinical studies table. It's going to give me a list of records
which have been inserted, and it will give me the status and whether it has been successful
or not.
So, in this case, there is only one record being imported, which is this record, and the
status is inserted, as you can see here. So it is fine, no issues. It is quite useful
for me to see everything: I can look directly at the records and see what's going on, so
it's good for troubleshooting.
As you can see, for a big table, for example this DCMS_FFL_XML_Hist
table, it breaks it off into different files. Because
Excel has row limitations, the software breaks them off into two different files, so that it
doesn't exceed the limit; it breaks them off like this.
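The chunking behavior described above can be sketched like this. The 65,536-row limit (the classic .xls worksheet cap) and the numbered file-naming scheme are assumptions for illustration; the webinar doesn't state Accel-Copy's actual limit or naming convention.

```python
# Sketch of splitting one large table extract across several files so
# each stays under a per-file row limit. The limit of 65,536 rows
# (the old .xls sheet cap) and the TABLE_n.csv naming are assumed,
# not taken from Accel-Copy's documentation.

MAX_ROWS_PER_FILE = 65_536

def split_rows(rows, max_rows_per_file=MAX_ROWS_PER_FILE):
    """Yield successive chunks of at most max_rows_per_file rows."""
    for start in range(0, len(rows), max_rows_per_file):
        yield rows[start:start + max_rows_per_file]

def file_names(table, n_files):
    """Name the pieces TABLE_1.csv, TABLE_2.csv, and so on."""
    return [f"{table}_{i}.csv" for i in range(1, n_files + 1)]

rows = list(range(100_000))            # stand-in for extracted records
chunks = list(split_rows(rows))
print(len(chunks), [len(c) for c in chunks])
print(file_names("DCMS_FFL_XML_Hist", len(chunks)))
```

With 100,000 stand-in rows this yields two files, matching the two-file split shown for the DCMS_FFL_XML_Hist table in the demo.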
So, as you can see, this is one piece of that table, and that one's full; both files
belong to this specific table. This is still going on, so I guess we can answer
questions now while we're waiting for it to finish.
Thank you, Ilya. So, if there are any questions, feel free to type them into the chat
window, and either Ilya or I will answer.
There's one question here: When copying a study to another environment, can the study
data be copied as well? Yes, absolutely. The tool lets you select these
migration definitions. So, when you saw Ilya extracting the source data... can you go
back to that screen, Ilya? The one that shows the extraction and the list of different
options.
Yeah, so you see there are, I guess, about nine different options here. If you select the full study
and global library, that includes the data, the discrepancies, the DCF reports,
all those things for a given study. You can also select just the patient data.
Let's say you have a study that was refreshed before, or maybe you have a database that was
cloned. Then what you can do is select just the patient data, delete out just
that patient data in your target environment,
and then update it with the new patient data from Production, for example. So,
there's a variety of those different extract types available to you.
Another question is: What kind of IT system is used by Accel-Copy? Is it Oracle? Yes.
There are different ways to install Accel-Copy. The way that you're seeing right here is
with the interface. This has its own schema that gets put inside the Oracle Clinical schema,
or database rather. And it is Oracle based, so it's extracting data from
Oracle and putting it into Oracle.
The browser interface is served from an Apache service that's installed on,
typically, your Oracle Clinical application servers. So, it's really meant to integrate
into what you already have set up for your environment.
The alternative way to install Accel-Copy is to do it where you don't install anything
on the source or target environments: nothing in the databases, nothing on the servers.
Instead, you simply put the application on a machine that has network access
to those databases. Then you can run it in a disconnected mode to do the extractions
from the command line.
So, you have those different options available to you.
I also want to show you something. Actually, I'm going to switch the screen share for just a moment,
if that's okay with you, Ilya, so I can show the matrix here. Okay, let me share.
All right, we'll start with this.
So, what I'm looking at right now is a metadata file. Each of these different definitions
comes from a metadata file. And the reason why this is important, and I know that not everyone
on the call is technically oriented, but the gist of it is that you have control
over what's extracted, how it's extracted, and what's being created for the different types.
So, if I wanted to know specifically what's extracted when I select the global library
type, I could look at the metadata file, and it lists out all of the tables, it lists out
how it checks whether the data already exists or not, and it even lists the select
statement that's being used to pull out the data. So, from that, you have more control
over what's actually coming out, and even how it removes data when it does the delete.
All of these things are driven by these metadata files, and you can modify them for your
particular use if necessary.
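The metadata-driven idea described above can be sketched as follows. This is a minimal illustration only: the entry fields, the table names, and the SELECT statements are invented stand-ins, since the real Accel-Copy metadata files are not shown in detail.

```python
# Sketch of a metadata-driven extract: each migration definition is a
# list of entries naming a table, its unique key (used for conflict
# checks), and the SELECT used to pull the data. Everything in this
# definition is a hypothetical stand-in, not Accel-Copy's real format.

GLOBAL_LIBRARY_DEF = [
    {"table": "DCMS",
     "key": ["DCM_ID"],
     "select": "SELECT * FROM DCMS WHERE DOMAIN = :domain"},
    {"table": "QUESTIONS",
     "key": ["QUESTION_ID"],
     "select": "SELECT * FROM QUESTIONS WHERE DOMAIN = :domain"},
]

def plan_extract(definition, domain):
    """Resolve each entry into the statement that would be executed."""
    return [(e["table"], e["select"].replace(":domain", repr(domain)))
            for e in definition]

for table, sql in plan_extract(GLOBAL_LIBRARY_DEF, "CARDIO"):
    print(table, "->", sql)
```

Editing the definition list (adding or removing an entry) is the kind of per-site customization the speakers describe.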
There's another question I saw here: Do you use the same database seed number for all
of the environments: Prod, Test, Dev, and QA? That's a good question. The way that it
works is that Accel-Copy maintains the IDs from your source study. So, the way you'd set up an
Accel-Copy environment is that you would potentially have your Production environment and then
maybe clone it to your Test and your Dev.
And then, after it's cloned to Test and Dev, you would change the seed number, which is
the way those IDs get generated. You would change it on your Test and your Dev so that,
going forward, each would have its own seed number.
And then, from that point, you can copy whatever you want from Production to Dev, or between
Dev and Test, and so on, because each environment will have different seed
numbers, although the data that were initially in there will have the same IDs, as expected.
So, that's the way the seed numbers of the environments are currently managed.
That is important to do, because if you don't change the seed number, Accel-Copy isn't
going to work.
So here's another question: What does seed number mean? Oracle Clinical
is set up for this thing called disconnected replication. The idea of this kind of replication
was that you could move studies and certain objects between different environments while
maintaining all of the pieces of each record in the database.
Each record in the database has a primary key, which is a unique identifier, usually
a number: in the responses table, it would be the response ID; in the DCMs table, it would
be the DCM ID. These ID numbers are generated through an Oracle sequence, and that sequence
increments, but it starts at a certain value. The value that it starts at comes from the seed
number. So, if I had a seed number of 7 and the sequence incremented by,
say, 100, the IDs would all end in 7: 7, 107, 207, and so on.
Now, if I had another database and my seed number was 11, then those IDs would
end in 11: 11, 111, 211, and so on. By using the seed number and that
larger increment for those sequences, we're assured that all of the IDs will be
unique in the different databases. So, that's what the seed number is all about.
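The seed-plus-increment scheme just described can be sketched in a few lines. Every ID an instance generates is congruent to that instance's seed modulo the increment, so two instances with different seeds can never produce the same ID. The increment of 100 is an illustrative value, not necessarily Oracle Clinical's actual setting.

```python
# Sketch of seed-based ID generation for disconnected replication:
# each database starts its sequence at its own seed and increments by
# a fixed step, so every ID is congruent to the seed modulo the step
# and IDs from databases with different seeds never collide.
# The step of 100 is illustrative.

INCREMENT = 100

def id_sequence(seed, count):
    """First `count` IDs an instance with this seed would generate."""
    return [seed + INCREMENT * i for i in range(count)]

prod_ids = id_sequence(7, 5)    # seed 7:  7, 107, 207, ...
test_ids = id_sequence(11, 5)   # seed 11: 11, 111, 211, ...

print(prod_ids)
print(test_ids)
assert not set(prod_ids) & set(test_ids)   # disjoint by construction
```

This is why cloned environments must be given fresh seeds before copying between them: two databases sharing a seed would hand out overlapping IDs.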
Okay, so another question: It was mentioned that when you extract from PROD into DEV,
you have to generate all of your procedures in DEV. Is this true? What if you want to
test the effect of revising a procedure while preserving the discrepancies, to see if it
changes or re-raises a large volume of discrepancies? If you have to regenerate all of your procedures,
then you couldn't test this.
Ilya, do you have any experience with this, with the effect of having to regenerate the procedures
and what that does to your discrepancies?
You know, I don't know much about this, because it's our test database, so we just generate
them from the back end, logging in and just generating them. So, I don't really
know what happens with the discrepancies.
But, I guess you don't have to regenerate them if you have already copied the
study before. If it's not the first time you're copying it using Accel-Copy, if you copied
it once, then I'm thinking you don't need to generate them again.
Yeah, I'm not really sure if regenerating from the back end is the same as regenerating
from the front end, because the front end definitely resets all of your discrepancies and
gives you that alert. But from the back end, I'm not convinced that it does that same thing.
I could be wrong; I will have to follow up on that particular area to be sure.
Yeah. We do it from the back end because it's more convenient, so we don't have to do
it one by one. We can just do one step for everything; that's why we do it.
By the way, the import just finished right now while we were taking questions.
Great.
So let me switch back so you can see it completed. It took about 10 minutes to load everything
for the study. It told me how many tables were copied, 248, and 19,266 rows were copied. And
then if I look at the log, like I said, it lists all the tables and whether they copied
successfully or not. So, I can see that for patient positions, 1,610 records were copied,
and I can go through all of the tables that were copied.
Not all of them actually get copied, because some of them don't have any entries for the
study, so some of them are 0. So, what I'm going to do is look at the log file to make
sure that there are no errors, no failed records. We just go through this and
check the records here; there are no errors.
It also changes the owning location. I'm not sure why it does that, but I think
it's just to differentiate the location. So, now my study is going to be in my development
environment. I just need to set up a couple of flags in Oracle Clinical to be able
to use our seed number. After that, it is ready to go.
Ilya, can you talk about that conflict-okay versus conflict-fail piece?
Okay. Maybe you can describe it a little bit first.
Sure, yeah. So, if you look at these different categories for each table, it shows
success and fail, and conflict okay and conflict fail. Whenever you're copying
a study, sometimes there's global data that's utilized by multiple studies, and sometimes
there's data that's specific to a study. So, you expect certain results. Say I'm
copying a study and it references lab units. Well, lab units are used by lots of studies.
So, in this case, the system checks by the unique key: does the lab unit exist?
It does, and in this case I expect it, so it's okay.
In another case, say I was copying a study and there was a conflict based on the unique key
in the clinical studies table, for example. Then it would
say: oh, there's a conflict, and this is actually a failure. We don't
expect a conflict here. So, this is a failed job and you need to look at it.
And then, as Ilya showed, in each of the output files it would flag which
specific record failed and what the error was, and you could diagnose
it that way. So, it's trying to make that delineation on each record as it goes:
for troubleshooting purposes and for informational purposes.
You have certain expectations built into the application of what would have an expected
conflict when you do a copy and what would not, and anything outside that should draw
attention to itself. So, that's the concept behind that.
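The conflict-okay versus conflict-fail distinction just explained can be sketched as a small classifier. Which tables count as shared global data is hand-configured here for illustration; the table names and status strings are assumptions, not Accel-Copy's actual configuration or log values.

```python
# Sketch of the conflict classification: a unique-key collision on
# shared global data (e.g. lab units) is expected and fine, while a
# collision on study-specific data (e.g. the clinical studies table)
# signals a problem. The SHARED_TABLES set and the status strings are
# illustrative assumptions.

SHARED_TABLES = {"LAB_UNITS", "REFERENCE_CODELISTS"}

def classify(table, key_already_exists):
    """Return a per-record status in the spirit of the demo's logs."""
    if not key_already_exists:
        return "inserted"
    return "conflict_ok" if table in SHARED_TABLES else "conflict_fail"

print(classify("LAB_UNITS", True))          # expected duplicate: fine
print(classify("CLINICAL_STUDIES", True))   # unexpected duplicate
print(classify("CLINICAL_STUDIES", False))  # clean insert
```

The point of the split is that expected collisions stay quiet while unexpected ones draw attention to themselves.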
And you can see here on Ilya's screenshot the status of inserted. If there was an
error or something happened, the status would be error, and then you'd have a comment
with the actual error that happened. It just gives you a better ability to troubleshoot.
There's also another field here that tells you exactly what was wrong, I believe.
If you go here, it's a little more troubleshooting information, so you will know what's being run.
Yeah, exactly. Where you are, it tells you which record was wrong. But if you
go over even further, it shows the actual SQL that was run, in the column there. If you click
on it, it actually includes all the IDs that were copied and how it was ordered and
all of these kinds of things.
So, if you have more than one study, there would be a union in this, and you could
even run the statement yourself to see specifically what was being done.
It's kind of nice that it's very dynamic, so you can actually see
what is going on.
Yeah. If there are any more questions, feel
free to type them in here.
Let me show you the matrix, not the movie, but the actual matrix of the study definitions.
Just a second here. Okay, it's loading. Sorry, it's very slow. Okay. Can you see my screen,
everyone? Ilya, do you see it?
Ah, no, I don't see it. Your window's not shared. There it is now.
Now it came up. It's a big file, so it's slow.
It came up now.
It's there now? Okay. What you see along the left here are the different migration definitions,
and along the right you see the different categories of objects in Oracle Clinical and
Remote Data Capture. It actually flags what's being copied, in case you have any questions
about it. So, for full study and glib, it's bringing in all of the metadata for the
study.
And, you know, questions, question groups, all the details in the DCMs. It's not bringing
things like study sets, Where clauses, randomization; it's not bringing
those by design. But it could, if that's something that you wanted;
then we would enhance that particular migration definition to include those objects.
So, you can see these different categories have different areas that are being
brought over, and then you can select among those as is appropriate for the
operation you're working on completing.
Okay, there's a question: Is there a single location where it points out the failures, or
do you have to scroll through every line of the email? What happens is, it is just
a log file. It's a log file that lists out your tables, and there are probably
about 200 or so of them. And then, yes, you have to look through those to see
where the errors are.
But it also takes a summation of the statuses. So, if you had a failure, at the end of the
log file it says there was a failure, and then you would know you needed to go through
and look at the various items in the log file. And then, if you wanted to drill down further,
you could actually look at the actual data items that came over.
So, here it says exited with complete status, but it would say failure if there was a problem.
And you can also see it from the screen here: if there was an error, you would see a status
of error right here, on this screen.
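The per-table statuses plus final summary described above can be sketched as a small log scan. The log line format here is invented for illustration; the real log layout is Accel-Copy's own.

```python
# Sketch of scanning a job log for failures: per-table status lines
# plus an overall summary, in the spirit of the emailed log described
# in the webinar. The line format "<TABLE> <n> rows <STATUS>" is an
# assumption for illustration.

def summarize(log_lines):
    """Return (overall_status, [tables that failed])."""
    failed = [line.split()[0] for line in log_lines
              if line.endswith("FAIL")]
    overall = "FAILURE" if failed else "COMPLETE"
    return overall, failed

log = [
    "CLINICAL_STUDIES 1 rows OK",
    "RESPONSES 1610 rows OK",
    "DCMS_FFL_XML_HIST 0 rows FAIL",
]
print(summarize(log))
```

A summary line like this is what lets you skip scrolling through a couple hundred table entries unless something actually failed.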
There's a question about why the counts are not the same for the export and the import.
What happens is that the export doesn't actually have data in some of the tables,
so the import only loads the ones that have data. Also, there are entries for extracting
the journal tables, but it's actually not extracting anything from them; that's there so it
can do a delete properly later.
On the import side, you don't actually import those journal tables, so there's a little
bit of variance in those numbers, but that's what's causing it.
It's also customizable. Like Michelle explained, what you can extract is customizable
through the different definitions. If we don't want some table, we can exclude it very easily
from the extraction.
So, any other questions?
Okay, so that concludes what I have to show for the webinar. So, I'll pass it back to
you, Eugene.
Great, thank you guys. If you have additional questions, feel free to ask them in the chat
box and we'll get to them.
But, in the mean time, this webinar is being recorded and as I mentioned earlier, it will
be posted on BioPharm's website within the next 24 hours. And we'll also be emailing
you a link to the recording and a PDF version of the presentation.
We have numerous webinars coming up in various different areas, including data
management as well as safety and clinical trial management. So, feel free to register for
those.
As always, feel free to email us with any questions or give us a call. We'd like to thank
you very much for attending this webinar. We hope you found it useful. Thank you so
much for joining us, and have a great rest of the day and evening.