ABI Software Book

This book is a collection of documentation covering software used within the ABI. This includes software developed internally as well as other commonly used applications. This version of the book has been customised for a tutorial presented at the EMBC 2013 meeting.

Contents:

The Physiome Model Repository

The documentation found here is mainly aimed at providing information to users of the Physiome Model Repository. This includes users interested in obtaining and running models from the repository, and those who wish to add models to the repository.

If you wish to deploy an instance of the repository software, PMR Software, please see the buildout repository on GitHub.

PMR - an introduction

The Physiome Model Repository (PMR) site is a web accessible repository of models which includes the CellML and FieldML repositories. (Note: the PMR site is powered by software called the PMR Software. Usually the term PMR will be referring to the PMR site, but if clarification is needed, "PMR site" will be used. It is also sometimes referred to as a "PMR instance", since it is possible to create other sites running the PMR Software, i.e. other instances.)

PMR relies on the distributed version control system Mercurial (Hg), which allows the repository to maintain a complete history of all changes made to every file contained within repository workspaces. In order to use the Physiome Model Repository, you will need to obtain a Mercurial client for your operating system, and become familiar with the basic functions of Mercurial. There are many excellent resources available on the internet, such as Mercurial, the definitive guide. Mercurial clients may be downloaded from the Mercurial website, which also provides documentation on Mercurial usage. A graphical alternative to a command-line client is available for Windows, called TortoiseHg. This provides a Windows explorer integrated system for working with Mercurial repositories.
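
If you are new to Mercurial, the handful of commands below covers most of what this documentation relies on. They are shown here purely as orientation; the workspace URI is a placeholder.

hg version          # confirm that the Mercurial client is installed
hg help clone       # built-in help for any Mercurial command
hg clone <URI>      # copy a remote repository (workspace) onto your computer
hg pull -u          # fetch new changes from the remote repository and update your files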

Downloading and viewing models from the Physiome Model Repository

There are several ways of obtaining and using models from the Physiome Model Repository, and which you choose will depend on the way you intend to use the models. If you are simply interested in running a particular model and viewing the output, you can use links found on model exposure pages to get hold of the model files. There are links available for a large number of models that will load the model directly into the OpenCell application, allowing you to explore simulation results with the help of a model diagram.

If you intend to use the model for further work, for example saving changes to the model or creating a new model based on an existing model or parts of an existing model, you should use Mercurial to obtain the files. In this way you also obtain the complete revision history of the files, and can add to this history as you make your own changes.

Searching the repository

The Physiome Model Repository has a basic search function that can be accessed by typing search terms into the box at the top right hand side of the page. You can use keywords such as cardiac or insulin, author names, or any other terms relevant to the models you want to find.

_images/PMR-downloading-searches1.png

The index page of the model repository provides two methods for finding models. There is a box for entering search terms, or you can click on categories based on model keywords to see all models in those categories.

If your search is yielding too many results, you may either try to narrow it down by choosing more or different keywords (eg. goldbeter 1991 instead of just goldbeter), or you can click the Advanced Search link just under the search box on the results page. This will take you to a search page where you can select specific item types (eg. exposures or workspaces), statuses, and other specifics.

_images/PMR-tut1-advancedsearch.png

In this search I have chosen to only have published exposures in my results.

Once you have found the model you are interested in, there are several ways you can view or download it.

Viewing models via the repository web interface

The most common use of the Physiome Model Repository web interface is probably to view information about models found on exposure pages, and to then download the models from these pages for simulation in a CellML supporting application.

Below is an example of a CellML exposure page. It contains documentation about the model(s), a diagram of what the model(s) represent, and a navigation pane that allows the user to select between available versions of the model. Many models only have one version, but in this case there are two variants.

_images/PMR-exposureeg1.png

An example of a CellML exposure page.

If you click on one of the model variant navigation links, you will be taken to a sub-page of the exposure which will allow you to view the actual CellML model in a number of ways.

_images/PMR-exposureeg2.png

An example of a CellML exposure sub-page.

On this page there are a number of options under a Views available panel at the right hand side.

  • Documentation - displays the model documentation, already visible in the main area of the exposure page.
  • Model Metadata - displays information such as the citation information, model authorship details, and PMR keywords.
  • Model Curation - displays the curation stars for the model, also visible at the top right of the page. Future additions to the curation system mean that there will be additional information to be displayed on this page.
  • Mathematics - displays all the equations in the model in graphical form.
  • Generated code - shows a page where you can view the model in a number of different languages: C, C_IDA, Fortran 77, MATLAB, and Python. You can copy the generated code directly from this page to paste into your code editor.
  • Cite this model - this page provides generic information about how to cite models in the repository.
  • Source View - provides a raw view of the CellML (XML) model code.
  • Simulate using OpenCell - this link will download the model and open it with OpenCell if you have the software installed. If the model has a session file, this will include an interactive diagram which can be clicked on to display traces of the simulation results.

The OpenCell session that is loaded when clicking on the Simulate using OpenCell link looks something like this:

_images/PMR-sessionexample1.png

An OpenCell session. Objects such as membrane channels in the diagram can be clicked - this will toggle the graph traces displaying the values for those objects.

Downloading models via Mercurial

All data in PMR are stored in workspaces and each workspace is a Mercurial repository. The most comprehensive method of downloading content from PMR is to clone the workspace containing the desired data. In this manner you will have a local copy of the entire history of that data, including all provenance data, and the ability to step back through the history of the workspace to a state that may not be available via the download links in the exposure pages discussed above. If you would like to modify the contents of a workspace, making use of Mercurial will ensure that accurate provenance records are maintained, as well as giving you all the other benefits of using a version control system.
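
As a rough command-line sketch (the workspace URI below is a placeholder), cloning a workspace and stepping back through its history looks like this:

hg clone http://models.physiomeproject.org/workspace/<workspace-id>
cd <workspace-id>
hg log                    # list all changesets (the history) of the workspace
hg update -r <revision>   # set your working copy to an earlier revision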

As software tools like OpenCOR and MAP evolve, they will be able to hide a lot of the Mercurial details and present the user with a user interface suitable for their specific application areas. Directly using Mercurial is, however, currently the most powerful way to leverage the full capabilities of PMR. Instructions for working with Mercurial can be found in the CellML repository tutorial.

Todo

Need to check this section on obtaining models via mercurial.

Working with PMR workspaces

Section author: David Nickerson

All models in the Physiome Model Repository exist in workspaces, which are Mercurial repositories that can be used to store any kind of file. Mercurial is a distributed version control system (DVCS).

In order to create your own workspaces, you will first need to create a repository account by registering at models.physiomeproject.org. Near the top right of the repository page there will be links labelled Log in and Register. Click on the register link, and follow the instructions.

Workspaces in the Physiome Model Repository are permanent once they are created. There is a teaching instance of the model repository which may be used for experimenting with PMR without worrying about creating permanent workspaces that might have errors in them. User accounts from the main PMR instance will be copied to the teaching instance each time it is recreated, but users may register for an account just on the teaching instance if they prefer. Such accounts will need to be recreated each time the teaching instance is recreated.

Note

The teaching instance of the repository is a mirror of the main repository site found at http://models.physiomeproject.org/, running the latest development version of the PMR Software.

Any changes you make to the contents of the teaching instance are not permanent, and will be overwritten with the contents of the main repository whenever the teaching instance is upgraded to a new PMR Software release. For this reason, you can feel free to experiment and make mistakes when pushing to the teaching instance. Please subscribe to the cellml-discussion mailing list to receive notifications of when the teaching instance will be refreshed.

See the section Migrating content to the main repository for instructions on how to migrate any content from the teaching instance to the main (permanent) Physiome Model Repository.

Creating a new workspace

Once a user is logged into an instance of PMR, they will be presented with a My Workspaces link in the top toolbar, as shown below:

_images/my-workspaces.png

The first paragraph includes a link to your dashboard, where you can add a new workspace, as shown below:

_images/add-workspace-dashboard.png

Currently Mercurial is the only available storage method for a new workspace, but this may be expanded to include other storage methods in future. A workspace should be given a meaningful title and a brief description to help locate the workspace using the repository search. Both of these fields can be edited later, so don't worry if you don't get them perfect the first time.

Clicking the Add button will then create the workspace, which will initially be empty, as shown below:

_images/new-workspace.png

In the figure above, the URI of the newly created workspace has been highlighted. This is the URI that will be used when operating on the workspace using Mercurial.

Working with collaborators

PMR makes use of Mercurial to manage individual workspaces. Mercurial is a Distributed Version Control System (DVCS), and as such encourages collaborative development of your model, dataset, results, etc. Using Mercurial, each member of the development team is able to have their own clone of the workspace which can be kept synchronized with the other members of the development team, while ensuring that each team member's contributions are accurately recorded in the workspace history.

Once a PMR workspace has been published, any user of the repository is able to access and clone the workspace, including team members and the anonymous public. Only privileged PMR members are able to make changes to the workspace, including pushing changes into the Mercurial repository. Private PMR workspaces, however, can only be viewed by those PMR members that have been granted access.

PMR provides access controls to manage the ability of PMR members and anonymous users to interact with workspaces. The access control is managed via the Sharing tab for a given workspace, as shown below.

_images/sharingTab.png

By default, you will see that repository administrators and curators have some permissions to access your workspace. Most of these can be turned off if you choose, but this is generally not recommended, as they will usually need access if you require help with your workspace. Using the Sharing tab you are able to search for other PMR members, such as the people in your development team. These members will then appear in the list of members, and you are able to set their access as required.

Using the Sharing controls there are currently four possible permissions that can be controlled. The Can add and Can edit permissions relate to the object that represents the workspace in the website database and are generally left in their default state. When selected for a given member, the Can view permission allows that member to view the workspace on the website, even if the workspace is private. Similarly, when the Can hg push permission is enabled, the selected member is able to push into the workspace - this is the most important permission, as enabling it allows members to add, modify, and delete the actual content of the workspace. One benefit of using Mercurial is that even if one of the privileged members accidentally modifies the workspace in a detrimental manner, you are able to revert the workspace back to its previous state.
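
For example, if a privileged member has pushed an unwanted changeset, one way to reverse it from the command line is Mercurial's backout command. This is only a sketch, and it assumes the unwanted changeset is the most recent one:

hg pull -u     # make sure your clone is up to date
hg backout -r tip -m "Back out accidental change"
hg push        # publish the reversing changeset back to the workspace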

When working in a collaborative team you would generally enable the Can hg push and Can view permissions for all team members and only enable the Can add and Can edit permissions for the team members responsible for the workspace presentation in the PMR website.

Uploading files to your workspace

The basic process for adding content to a workspace consists of the following steps:

  1. Clone the workspace to your local machine.
  2. Add files to cloned workspace.
  3. Commit the files using a Mercurial client.
  4. Push the workspace back to the repository.

An example demonstrating these steps can be found in this tutorial step: Populate with content.
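
A compressed command-line sketch of the whole cycle is given below; the workspace URI is a placeholder, and the commit message is just an example.

hg clone http://teaching.physiomeproject.org/workspace/<workspace-id>
cd <workspace-id>
# copy your model and documentation files into this folder, then:
hg add                       # schedule the new files for version control
hg commit -m "Add initial model and documentation files"
hg push                      # send the new changeset back to the repository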

CellML Model Repository tutorial

Section author: David Nickerson, Randall Britten, Dougal Cowan

About this tutorial

The CellML model repository is an instance of the Physiome Model Repository (PMR) customised for CellML models. PMR currently relies on the distributed version control system Mercurial (Hg), which allows the repository to maintain a complete history of all changes made to every file it contains. This tutorial demonstrates how to work with the repository using TortoiseHg, which provides a Windows explorer integrated system for working with Mercurial repositories.

The equivalent command line versions of the TortoiseHg actions will also be
mentioned briefly, so that these ideas can also be used without a graphical
client, and on Linux and similar systems. These will be denoted by boxes
like this.

This tutorial requires you to have:

PMR concepts

PMR and the CellML model repository use a certain amount of jargon - some is specific to the repository software, and some is related to distributed version control systems (DVCSs). Below are basic explanations of some of these terms as they apply to the repository.

Workspace
A container (much like a folder or directory on your computer) to hold the files that make up a model, as well as any other files such as documentation or metadata, etc. In practical terms, each workspace is a Mercurial repository.
Exposure
An exposure is a publicly viewable presentation of a particular revision of a model. An exposure can present one or many files from your workspace, along with documentation and other information about your model.

The Mercurial DVCS has a range of terms that are useful to know, and definitions of these terms can be found in the Mercurial glossary: http://mercurial.selenic.com/wiki/Glossary.

Working with the repository web interface

This part of the tutorial will teach you how to find models in the Physiome Model Repository (http://models.physiomeproject.org), how to view a range of information about those models, and how to download models. The first page in the repository consists of basic navigation, a link to the main model listing, a search box at the top right, and a list of model category links as shown below.

_images/PMR-tut1-mainscreen.png

The front page of the Physiome Model Repository.

Model listings

Clicking on the main model listing or any of the category listings will take you to a page displaying a list of exposed models in that category. Click on electrophysiology for example, and a list of over 100 exposed models in that category will be displayed, as shown here.

_images/PMR-tut1-modellistings.png

A list of models in the electrophysiology category.

Clicking on an item in the list will take you to the exposure page for that model.

Searching the repository

You can search for the model that you wish to work on by entering a search term in the box at the top right of the page. Many of the models in the repository are named by the first author and publication date of the paper, so a good search query might be something like goldbeter 1991. A list of the results of your search will probably contain both workspaces and exposures - you will need to click on the workspace of the model you wish to work on. Workspaces can be identified because their links are pale blue and have no details line following the clickable link. In the following screenshot, the first two results are workspaces, and the remainder are exposures.

_images/PMR-tut1-searchresults.png

A search results listing on the Physiome Model Repository site.

Click on an exposure result to view information about the model and to get links for downloading or simulating the model. Click on workspaces to see the contents of the model workspace and the revision history of the model.

Working with the repository using Mercurial

This part of the tutorial will teach you how to clone a workspace from the model repository using a Mercurial client, create your own workspace, and then push the cloned workspace into your new workspace in the repository. We will be using a fork of an existing workspace, which provides you with a personal copy of a workspace that you can edit and push changes to.

Registering an account and logging in

First, navigate to the teaching instance of the Physiome Model Repository at http://teaching.physiomeproject.org/.

Note

The teaching instance of the repository is a mirror of the main repository site found at http://models.physiomeproject.org/, running the latest development version of the PMR Software.

Any changes you make to the contents of the teaching instance are not permanent, and will be overwritten with the contents of the main repository whenever the teaching instance is upgraded to a new PMR Software release. For this reason, you can feel free to experiment and make mistakes when pushing to the teaching instance. Please subscribe to the cellml-discussion mailing list to receive notifications of when the teaching instance will be refreshed.

See the section Migrating content to the main repository for instructions on how to migrate any content from the teaching instance to the main (permanent) Physiome Model Repository.

In order to make changes to models in the CellML repository, you must first register for an account. The Log in and Register links can be found near the top right corner of the page. Your account will have the appropriate access privileges so that you can push any changes you have made to a model back into the repository.

Click on the Register link near the top right, and fill in the registration form. Enter your username and desired password. After completing the email validation step, you can now log in to the repository.

Note

This username and password are also the credentials you use to interact with the repository via Mercurial.

Once logged in to the repository, you will notice that there is a new link in the navigation bar, My Workspaces. This is where all the workspaces you create later on will be listed. The Log in and Register links are also replaced by your username and a Log out link.

Mercurial username configuration

Important

Username setup for Mercurial

Since you are about to make changes, your name needs to be recorded as part of the workspace revision history. When you commit your changes using Mercurial, the commit is initially "offline" and independent of the central PMR instance. This means that you have to set up your username for the Mercurial client software, even though you have already registered a username on the PMR site.

You only need to do this once.

Steps for TortoiseHg:

  • Right click on any file or folder in Windows Explorer, and select TortoiseHg ‣ Global Settings.
  • Select Commit and then enter your name followed by your e-mail address in "angle brackets" (i.e. less-than "<" and greater-than ">"). You can in fact enter anything you want here, but this is the accepted best practice. Note that this information becomes publicly visible if the PMR instance that you push your changes to is public.

Steps for command line:

  • Edit the Mercurial configuration text file:
    • For per-repository settings, the file inside the repository: <repo>\.hg\hgrc
    • Per-user settings on Linux: ~/.hgrc
    • Per-user settings on Windows: %USERPROFILE%\mercurial.ini
  • Add the following entry:

    [ui]
    username = Firstname Lastname <firstname.lastname@example.net>
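
If you want to confirm that the setting has been picked up, Mercurial can print the configured value (a quick optional check):

hg showconfig ui.username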
    
Forking an existing workspace

Important

It is essential to use a Mercurial client to obtain models from the repository for editing. The Mercurial client is not only able to keep track of all the changes you make (allowing you to back-track if you make any errors), but using a Mercurial client is the only way to add any changes you have made back into the repository.

For this tutorial we will fork an existing workspace. This creates a new workspace owned by you, containing a copy of all the files in the workspace you forked, including their complete history. This is equivalent to cloning the workspace, creating a new workspace of your own, and then pushing the contents of the cloned workspace into your new workspace.

Forking a workspace can be done using the Physiome Model Repository web interface. The first step is to find the workspace you wish to fork. We will use the Beeler, Reuter 1977 workspace which can be found at: http://teaching.physiomeproject.org/workspace/beeler_reuter_1977.

Now click on the fork option in the toolbar, as shown below.

_images/PMR-fork1.png

You will be asked to confirm the fork action by clicking the Fork button. You will then be shown the page for your forked workspace.

Cloning your forked workspace

In order to make changes to your workspace, you have to clone it to your own computer. In order to do this, copy the URI for mercurial clone/pull/push as shown below:

_images/PMR-tut1-cloneurl.png

Copying the URI for cloning your workspace.

In Windows explorer, find the folder where you want to create the clone of the workspace. Then right click to bring up the context menu, and select TortoiseHG ‣ Clone as shown below:

_images/PMR-tut1-tortoisehgclone.png

Paste the copied URL into the Source: area and then click the Clone button. This will create a folder called beeler_reuter_1977_tut that contains all the files and history of your forked workspace. The folder will be created inside the folder in which you initiated the clone command.

Command line equivalent

hg clone [URI]

You will need to enter your username and password to clone the workspace, as the fork will be set to private when it is created.

The repository will be cloned within the current directory of your command line window.

Making changes to workspace contents

Your cloned workspace is now ready for you to edit the model file and make a commit each time you want to save the changes you have made. As an example, open the model file in your text editor and remove the paragraph which describes validation errors from the documentation section, as shown below:

_images/PMR-tut1-editcellmlfile.png

Save the file. If you are using TortoiseHg, you will notice that the icon overlay has changed to a red exclamation mark. This indicates that the file now has uncommitted changes.
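
If you are working from the command line rather than TortoiseHg, the equivalent way to check for uncommitted changes is:

hg status    # lists modified (M), added (A) and removed (R) files
hg diff      # shows the uncommitted changes line by line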

Committing changes

If you are using TortoiseHg, bring up the shell menu for the altered file and select TortoiseHg ‣ Hg Commit. A window will appear showing details of the changes you are about to commit, and prompting for a commit message. Every time you commit changes, you should enter a useful commit message with information about what changes have been made. In this instance, something like "Removed the paragraph about validation errors from the documentation" is appropriate.

Click on the Commit button at the far left of the toolbar. The icon overlay for the file will now change to a green tick, indicating that changes to the file have been committed.

_images/PMR-tut1-commitchanges.png

Command line equivalent

hg commit -m "Removed the paragraph about validation errors from the documentation"

Pushing changes to the repository

Your cloned workspace on your local machine now has a small history of changes which you wish to push into the repository.

Right click on your workspace folder in Windows explorer, and select TortoiseHg ‣ Hg Synchronize from the shell menu. This will bring up a window from which you can manage changes to the workspace in the repository. Click on the Push button in the toolbar, and enter your username and password when prompted.

_images/PMR-tut1-pushchanges.png

Command line equivalent

hg push
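
If you would like to check which changesets will be sent before pushing, Mercurial can list them:

hg outgoing    # lists local changesets not yet present in the remote repository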

Now navigate to your workspace and click on the history toolbar button. This will show entries under the Most recent changes, complete with the commit messages you entered for each commit, as shown below:

_images/PMR-tut1-newhistoryentry.png

Create an exposure

As explained earlier, an exposure aims to bring a particular revision to the attention of users who are browsing and searching the repository.

There are two ways of making an exposure - creating a new exposure from scratch, or "Rolling over" an exposure. Rolling over is used when a workspace already has an existing exposure, and the updates to the workspace have not fundamentally changed the structure of the workspace. This means that all the information used in making the previous exposure is still valid for making a new exposure of a more recent revision of the workspace. Strictly speaking, an exposure can be rolled over to an older revision as well, but this is not the usual usage.

As you are working in a forked repository, you will need to create a new exposure from scratch. To learn how to create exposures, please refer to Creating CellML exposures.

Migrating content to the main repository

As noted above, the teaching instance used in this tutorial is not suitable for permanent storage of your work. One of the advantages of using a distributed version control system to manage PMR workspaces is that it is straightforward to move the entire workspace, including the full history and provenance record, from one location to another. Recent releases of PMR Software have also provided the feature to export exposures so that they can then be imported into another PMR Software instance.

If you would like to move your work from the teaching instance of the model repository into a new workspace on the main repository (or from any PMR Software instance to another one), you should follow these steps:

  1. Ensure that you have pushed all your commits to the source instance;
  2. Create the new workspace in the destination repository;
  3. Navigate to the workspace created and choose the synchronize action from the workspace toolbar, as shown below.

_images/PMR-synchronize-form.png

  4. Fill in the URI of your workspace on the source instance (e.g., http://models.physiomeproject.org/w/andre/cortassa-ECME-2006).
  5. Click the Synchronize button.
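
Because each workspace is an ordinary Mercurial repository, an alternative to the Synchronize form is to push the content across yourself from the command line. The sketch below uses placeholder URIs and assumes you have push access to the newly created destination workspace:

hg clone http://teaching.physiomeproject.org/workspace/<source-workspace-id>
cd <source-workspace-id>
hg push http://models.physiomeproject.org/workspace/<destination-workspace-id>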

In a similar manner, you are able to copy exposures you might have made on the teaching instance over to the main repository, or from the main to the teaching instance if you want to test things out. Follow these steps to migrate an exposure from one repository to another.

  1. Navigate to the exposure you would like to migrate in the source repository.
  2. Choose the wizard item from the toolbar as shown below.

_images/exposure-wizard-highlight-export.png

  3. In the destination repository, navigate to the desired revision of the (published) workspace and choose the Create exposure action as described in the directions for creating an exposure from scratch.
  4. Rather than building a new exposure, choose the Exposure Import via URI tab in the exposure creation wizard, as shown below.

_images/exposure-wizard-import-from-uri.png

  5. Copy and paste the URI from the source exposure wizard, highlighted above, into the Exposure Export URI field in the exposure creation wizard shown above.
  6. Click the Add button. This will take you back to the standard exposure build page, but now with all the fields pre-populated from the source exposure.
  7. Navigate to the bottom of the page and click the Build button to actually build the exposure pages. You are free to reconfigure the exposure if desired; some help is available for this if needed.

Creating CellML exposures

Section author: Dougal Cowan

CellML models in the Physiome Model Repository are presented through exposures. An exposure is a view of a particular revision of a workspace, and is quite flexible in terms of what it can present. A workspace may contain one or more models, and any number of models may be presented in a single exposure. Exposures generally take the form of some documentation about the model(s), a range of ways of looking at the model(s) or their metadata, and links to download the model(s).

The example below shows the main exposure page for the Bondarenko et al. 2004 workspace. This workspace contains two models, which can be viewed via the Navigation pane on the right hand side of the page.

_images/PMR-exposureeg1.png

Example of an exposure page

If you click on one of the model navigation links, it will take you to the page for that particular model. Exposures most often present a single model, although they can present any number of models, each with its own documentation and views.

_images/PMR-exposureeg2.png

Example of a model exposure page

Most of the CellML exposures in the repository are currently of this type, with a main documentation page containing navigation links to the model or models themselves.

The model pages have links that enable the user to do things like view the model equations, look at the citation information, or run the model as an interactive session using the OpenCell application. These links are found in the pane titled Views available on the right hand side of the page.

This tutorial contains instructions on how to create one of these standard CellML exposures, as well as information about how to create other alternative types of exposure.

Creating standard CellML exposures

Note

In order to create an exposure of a workspace, the workspace must be published. You will need to submit your workspace for publication and await review. It is not possible to create exposures in private workspaces.

In this example I will use a fork of the Beeler Reuter 1977 workspace. Creating a fork of a workspace creates a clone of that workspace that you own and can push changes to. You can fork any publicly available workspace in the Physiome Model Repository. For more information on this feature of PMR, refer to the information on features for collaboration, or see the relevant section of the tutorial.

At this point you will need to submit the workspace for publication, using the state: menu at the top right of the workspace view page.

_images/PMR-submitworkspaceforpublication.png

The state menu is used to submit objects such as workspaces for publication. Submitted items will be reviewed by site administrators and then published.

You will need to wait for your workspace to be made public before you can carry on and create an exposure of your workspace.

Choose the revision to expose

As an exposure is created to present a particular revision of a workspace, the first thing to do is to navigate to that revision. To do this, first find the workspace - if this is your own workspace, you can click on the My Workspaces button in the navigation bar of the repository and find the workspace of interest in the listing displayed. After navigating to your workspace, click on the history button in the menu bar.

_images/PMR-workspacehistory.png

The revision history of a fork of the Beeler Reuter 1977 workspace

Now you can select the revision of the workspace you wish to expose by clicking on the manifest of that revision. Usually you will want to expose the latest revision, which appears at the top of the list.

After selecting the revision you wish to expose, click on the workspace actions menu at the far right end of the menu bar and select create exposure.

_images/PMR-revisioncreateexposure.png

Selecting the manifest of the revision to expose

Building the exposure

Selecting the create exposure option in the menu bar will bring you to the first page of the exposure wizard. This web interface allows you to select the model files, documentation files, and settings that will be used to create the exposure.

The initial page of the exposure creation wizard allows you to select the main documentation file and the first model file. Select the HTML annotator option and the HTML documentation file for the workspace in the Exposure main view section. For the New Exposure File Entry section, choose the CellML file you wish to expose, and select CellML as the file type.

_images/PMR-wizard1.png

Selecting the main documentation and the first CellML model file

Note

Documentation should be written in HTML format. Some previous users of the CellML repository may be familiar with the tmpdoc style documentation, which has been deprecated. For an example of what a fairly standard HTML documentation file might look like, take a look at the documentation for the Beeler Reuter 1977 model.

Once you have selected the documentation and model files and their types, click on the Add button. This will take you to the next step of the wizard, where you can select various options for the model you have chosen to expose, and will allow you to add further model files to the exposure if desired.

The wizard shows a subgroup for each CellML file to be included in the exposure. For each CellML file, select the following options:

  • Documentation
    • Documentation file - select the HTML file created to document the model
    • View generator - select HTML annotator option
  • Basic Model Curation
    • Curation flags - CellML model repository curators may select flags according to the status of the model
  • License and Citation
    • File/Citation format - select CellML RDF metadata to automatically generate a citation page using the model RDF
    • License - select Creative Commons Attribution 3.0 Unported
  • Source Viewer
    • Language Type - select xml
  • OpenCell Session Link
    • Session File - select the session.xml if it has been created

_images/PMR-wizard2.png

Selecting options for the model file subgroup

After selecting the subgroup options, you need to click the Update button to set the chosen options for the exposure builder. If you do not update the subgroup, the options you selected will be replaced by the default options when you click Build.

For exposures where you wish to expose multiple models, click on the Add file button at this stage to create another subgroup. You can then use this to set up all the same options listed above for the additional model file. Remember to click Update when you have completed selecting the options for each subgroup before adding another subgroup.

After setting all the options for the models you wish to expose, click on the Build button. The repository software will then create the exposure pages and display the main page of the exposure.

In order to make the exposure visible and searchable, you will need to publish it. You can choose to submit your exposure for review, or if you have sufficient privileges you can publish it directly.

_images/PMR-exposurepublish.png

Publish your exposure to make it visible to others.

Other types of exposure

Because the exposure builder uses HTML documentation, it is possible to create customized types of exposure that differ from the standard type shown above. For example, you might want to create an exposure that simply documents and provides links to models in a PMR workspace that are encoded in languages other than CellML. You can also use the HTML documentation to provide tutorials or other documents, with resources stored in the workspace and linked to from the HTML.

Examples of other exposure types:

Making an exposure using "roll-over"

As explained earlier, an exposure aims to bring a particular revision to the attention of users who are browsing and searching the repository.

"Rolling over" an exposure is the method used when a workspace already has an existing exposure, and the updates to the workspace have not fundamentally changed the structure of the workspace. This means that all the information used in making the previous exposure is still valid for making a new exposure of a more recent revision of the workspace. Strictly speaking, an exposure can be rolled over to an older revision as well, but this is not the usual usage.

Note

A forked workspace contains all of the revision history of the workspace it was created from, but does not contain any of the exposures that existed for the original workspace. You will always need to create an exposure from scratch in newly forked repositories.

From the view page of your workspace, select "exposure rollover".

_images/PMR-tut1-rolloverbutton.png

The exposure rollover button takes you to a list of revisions of the workspace, with existing exposures on the right hand side, and revision ids on the left. Each revision id has a radio button, used to select the revision you wish to create a new rolled over exposure for. Each existing exposure also has a radio button, used to select the exposure you wish to base your new one on. The most common use case is to select the latest exposure and the latest revision, and then click the Migrate button at the bottom of the list.

_images/PMR-tut1-rolloverlist.png

The new exposure will be created and displayed. When a new exposure is created, it is initially put in the private state. This means that only the user who created it or other users with appropriate permissions can see it, and it will not appear in search results or model listings. In order to publish the exposure, you will need to select submit for publication from the state menu.

_images/PMR-tut1-submitforpublication.png

The state will change to "pending review". The administrator or curators of the repository will then review and publish the exposure, as well as expiring the old exposure.

Creating FieldML exposures

Section author: Dougal Cowan

FieldML models in the Physiome Model Repository are presented through exposures. A FieldML exposure has some similarities to a CellML exposure - usually consisting of a main documentation page with some information about the model, accompanied by a range of different views of the model data and or metadata. FieldML exposures also allow the real-time three-dimensional display of model meshes within the browser through the use of the Zinc plugin.

The example screenshots below show the main documentation page view and the 3D visualization provided by the Zinc viewer.

_images/PMR-fieldmlexposureexample1.png

The main documentation view of a FieldML exposure

_images/PMR-fieldmlexposureexample2.png

The main Zinc viewer view of the same FieldML exposure

Creating the exposure files

To create a FieldML exposure, the following files will need to be stored in a workspace in PMR:

  • The FieldML model file(s)
  • An RDF file containing metadata about the model, and identifying the JSON file to be used to specify the visualisation.
  • The JSON file that specifies the Zinc viewer visualization.
  • Optionally, documentation (HTML) and images (PNG, JPG etc).

The following example RDF file comes from the Laminar Structure of the Heart workspace in the FieldML repository:

 1  <?xml version="1.0" encoding="utf-8"?>
 2  <rdf:RDF
 3        xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 4        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 5        xmlns:dc="http://purl.org/dc/elements/1.1/"
 6        xmlns:dcterms="http://purl.org/dc/terms/"
 7        xmlns:vCard="http://www.w3.org/2001/vcard-rdf/3.0#"
 8        xmlns:pmr2="http://namespace.physiomeproject.org/pmr2#">
 9     <rdf:Description rdf:about="">
10        <dc:title>
11              Laminar structure of the Heart: A mathematical model.
12        </dc:title>
13        <dc:creator>
14           <rdf:Seq>
15              <rdf:li>LeGrice, I.J.</rdf:li>
16              <rdf:li>Hunter, P.J.</rdf:li>
17              <rdf:li>Smaill, B.H.</rdf:li>
18           </rdf:Seq>
19        </dc:creator>
20        <dcterms:bibliographicCitation>
21              American Journal of Physiology 272: H2466-H2476, 1997.
22        </dcterms:bibliographicCitation>
23        <dcterms:isPartOf rdf:resource="info:pmid/9176318"/>
24        <pmr2:annotation rdf:parseType="Resource">
25           <pmr2:type
26                 rdf:resource="http://namespace.physiomeproject.org/pmr2/note#json_zinc_viewer"/>
27           <pmr2:fields>
28              <rdf:Bag>
29                 <rdf:li rdf:parseType="Resource">
30                    <pmr2:field rdf:parseType="Resource">
31                       <pmr2:key>json</pmr2:key>
32                       <pmr2:value>heart.json</pmr2:value>
33                    </pmr2:field>
34                 </rdf:li>
35              </rdf:Bag>
36           </pmr2:fields>
37        </pmr2:annotation>
38     </rdf:Description>
39  </rdf:RDF>

This file provides citation metadata and a reference to the Zinc viewer JSON file that will be used to describe the 3D visualisation of the FieldML model. The file breaks down into three main sections:

  • Lines 3-8, namespaces used.
  • Lines 10-23, citation metadata.
  • Lines 24-37, resource description, used to specify the JSON file that defines the visualisation.

Example of the JSON file from the same (Laminar Structure of the Heart) workspace:

 1  {
 2      "View" : [
 3        {
 4        "camera" : [9.70448, -288.334, -4.43035],
 5        "target" : [9.70448, 6.40667, -4.43035],
 6        "up"     : [-1, 0, 0],
 7        "angle" : 40
 8        }
 9      ],
10      "Models": [
11          {
12              "files": [
13                  "heart.xml"
14              ],
15              "externalresources": [
16                  "heart_mesh.connectivity",
17                  "heart_mesh.node.coordinates"
18              ],
19              "graphics": [
20                  {
21                      "type": "surfaces",
22                      "ambient" : [0.4, 0, 0.9],
23                      "diffuse" : [0.4, 0,0.9],
24                      "alpha" : 0.3,
25                      "xiFace" : "xi3_1",
26                      "coordinatesField": "heart.coordinates"
27                  },
28                  {
29                      "type": "surfaces",
30                      "ambient" : [0.3, 0, 0.3],
31                      "diffuse" : [1, 0, 0],
32                        "specular" : [0.5, 0.5, 0.5],
33                      "shininess" : 0.5,
34                      "xiFace" : "xi3_0",
35                      "coordinatesField" : "heart.coordinates"
36                  },
37                  {
38                      "type": "lines",
39                      "coordinatesField" : "heart.coordinates"
40                  }
41              ],
42              "elementDiscretization" : 8,
43              "region_name" : "heart",
44              "group": "Structures",
45              "label": "heart",
46              "load": true
47          }
48     ]
49  }

  • Lines 2-8 set up the camera or viewpoint for the initial Zinc viewer display.
  • Lines 12-18 specify the FieldML model files.
  • Lines 19-41 set up the actual visualisations of the mesh - in this case, two different surfaces and a set of lines.
  • Lines 42-46 specify global visualisation settings.

For more information on these settings, please see the cmgui documentation.

Note

The specifics of these RDF and JSON files are a work in progress, and may change with each new version of the Zinc viewer plugin or the PMR software.

Creating the exposure in the Physiome Model Repository

First you will need to create a workspace to put your model in, following the process outlined in the document on working with workspaces.

  • Upload your FieldML model files and Zinc viewer specification files.
  • Find the revision of the workspace you wish to expose and create an exposure.

Exposure wizard procedure

  • View generator: as per CellML, select the HTML annotator and the HTML documentation file.
  • New exposure file entry: select the .rdf file and select FieldML (JSON) as the file type. Click Add.
  • Documentation file: same as above.
  • Curation flags: none (should be removed?).
  • No other settings.
  • Click Update.
  • Click Build.

To see the 3D visualisation, you will need to have the latest Zinc plugin installed.

Embedded workspaces and their uses

Section author: David Nickerson

Todo

This section needs more work.

Workspaces in PMR are currently implemented as Mercurial repositories. One Mercurial feature that is quite useful in the context of PMR is nested repositories. Using the more general PMR concepts, we term such nesting embedded workspaces.

Embedded workspaces:

  • are intended to manage the separation of modules which are integrated to create a model;
  • facilitate the sharing and reuse of model components independently from the source model;
  • enable the development of the modules to proceed independently, with the version of each embedded workspace also being tracked; and
  • allow authors to make use of relative URIs when linking between data resources providing a file system agnostic method to describe complex module relationships in a portable manner.

Workspaces can be embedded at a specific revision or set to track the most recent revision of the source workspace. Changes made to the source workspace will not affect any embedding workspace until the author explicitly chooses to update the embedded workspace. This provides the author with the opportunity to review the changesets and make an informed decision regarding alterations to embedded revisions. Any alteration to the specific revision of an embedded workspace is captured in a changeset in the embedding workspace – thus providing a clear provenance record of the entire dataset in the workspace.
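
As a concrete illustration, a nested (embedded) repository is declared in Mercurial through an .hgsub file in the embedding workspace; the paths and URIs below are hypothetical:

hg clone http://teaching.physiomeproject.org/workspace/my-module modules/my-module
echo "modules/my-module = http://teaching.physiomeproject.org/workspace/my-module" > .hgsub
hg add .hgsub
hg commit -m "Embed the my-module workspace"
# Mercurial records the embedded revision in .hgsubstate; committing again after
# updating modules/my-module captures the new embedded revision.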

Uses

Best practice

See also the recommendations from the Mercurial project.

CellML Curation in Legacy Repository Software

As PMR contains much of the data ported over from the legacy software products that powered the CellML Model Repository, the curation system from that software was ported to PMR verbatim. This document describing the curation aspect of the repository is derived from documentation on the CellML site.

CellML Model Curation: the Theory

The basic measure of curation in a CellML model is described by the curation level of the model document. We have defined four levels of curation:

  • Level 0: not curated.
  • Level 1: the CellML model is consistent with the mathematics in the original published paper.
  • Level 2: the CellML model has been checked for (i) typographical errors, (ii) consistency of units, (iii) that all parameters and initial conditions are defined, (iv) that the model is not over-constrained, in the sense that it contains equations or initial values which are either redundant or inconsistent, and (v) that running the model in an appropriate simulation environment reproduces the results published in the original paper.
  • Level 3: the model is checked for the extent to which it satisfies physical constraints such as conservation of mass, momentum, charge, etc. This level of curation needs to be conducted by specialised domain experts.

CellML Model Curation: the Practice

Our ultimate aim is to complete the curation of all the models in the repository, ideally to the level that they replicate the results in the published paper (level 2 curation status). However, we acknowledge that for some models this will not be possible. Missing parameters and equations are just one limitation; at this point it should also be emphasised that the process of curation is not just about "fixing the CellML model" so that it runs in currently available tools. Occasionally it is possible for a model to be expressed in valid CellML, but not yet able to be solved by CellML tools. An example is the seminal Saucerman et al. 2003 model, which contains ODEs as well as a set of non-linear algebraic equations which need to be solved simultaneously. The developers of the CellML editing and simulation environment OpenCell are currently working on addressing these requirements.

The following steps describe the process of curating a CellML model:

  • Step 1: the model is run through OpenCell and COR. COR in particular is a useful validation tool. It renders the MathML in a human readable format making it much easier to identify any typographical errors in the model equations. COR also provides a comprehensive error messaging system which identifies typographical errors, missing equations and parameters, and any redundancy in the model such as duplicated variables or connections. Once these errors are fixed, and assuming the model is now complete, we compare the CellML model equations with those in the published paper, and if they match, the CellML model is awarded a single star - or level 1 curation status.
  • Step 2: Assuming the model is able to run in OpenCell and COR, we then go onto compare the CellML model simulation output from COR and OpenCell with the published results. This is often a case of comparing the graphical outputs of the model with the figures in the published paper, and is currently a qualitative process. If the simulation results from the CellML model and the original model match, the CellML model is awarded a second star - or level 2 curation status.
  • Step 3: if, at the end of this process, the CellML model is still missing parameters or equations, or we are unable to match the simulation results with the published paper, we seek help from the original model author. Where possible, we try to obtain the original model code, and this often plays an invaluable role in fixing the CellML model.
  • Step 4: Sometimes we have been able to engage the original model author further, such that they take over the responsibility of curating the CellML model themselves. Such models include those published by Mike Cooling and Frank Sachse. In these instances the CellML model is awarded a third star - or level 3 curation status. While this is laudable, ideally we would like to take the curation process one step further, such that level 3 curation should be performed by a domain expert who is not the author of the original publication (i.e., peer review). This expert would then check that the CellML model meets the appropriate constraints and expectations for a particular type of model.

A point to note is that levels 1 and 2 of the CellML model curation status may be mutually exclusive - in our experience, it is rare for a paper describing a model to contain no typographical errors or omissions. In this situation, Version 1 of a CellML model usually satisfies curation level 1 in that it reflects the model as it is described in the publication - errors included, while subsequent versions of the CellML model break the requirements for meeting level 1 curation in order to meet the standards of level 2. Taking this idea further, this means that a model with 2 yellow stars doesn't necessarily meet the requirements of level 1 curation but it does meet the requirements of level 2. Hopefully this conflict will be resolved when we replace the current star system with a more meaningful set of curation annotations.

Ultimately, we would like to encourage the scientific modeling community - including model authors, journals and publishing houses - to publish their models in CellML code in the CellML model repository concurrent with the publication of the printed article. This will eliminate the need for code-to-text-to-code translations and thus avoid many of the errors which are introduced during the translation process.

CellML Model Simulation: the Theory and Practice

As part of the process of model curation, it is important to know what tools were used to simulate (run) the model and how well the model runs in a specific simulation environment. In this case, the theory and the practice are essentially the same thing: we carry out a series of simulation steps which then translate into a confidence level recorded as part of each simulator's metadata for the model. The four confidence levels are defined as:

  • Level 0: not curated (no stars);
  • Level 1: the model loads and runs in the specified simulation environment (1 star);
  • Level 2: the model produces results that are qualitatively similar to those previously published for the model (2 stars);
  • Level 3: the model has been quantitatively and rigorously verified as producing identical results to the original published model (3 stars).


Using SED-ML to specify simulations

Section author: Dougal Cowan

Hopefully PMR will support SED-ML simulations as part of the CellML views.

Todo

  • Update all PMR documentation to reflect workspace ID changes and user workspace changes, if they go ahead.
  • Get embedded workspaces doc written.
  • Get some best practice docs written.

OpenCOR

OpenCOR is an open-source, cross-platform, CellML-based modelling environment. The following documentation refers to the 2013-06-22 version of OpenCOR; its supported platforms can be found here.

OpenCOR provides two types of user interfaces:

Command Line Interface (CLI)

Note

The CLI version of OpenCOR currently offers a limited number of features. You might therefore want to use its Graphical User Interface (GUI) version instead.

Help

$ ./OpenCOR -h
Usage: OpenCOR [-a|--about] [-c|--command [<plugin>::]<command> <options>] [-h|--help] [-p|--plugins] [-v|--version] [<files>]
 -a, --about     Display some information about OpenCOR
 -c, --command   Send a command to one or all the plugins
 -h, --help      Display this help information
 -p, --plugins   Display the list of plugins
 -v, --version   Display the version of OpenCOR

Version

$ ./OpenCOR -v
OpenCOR [2013-06-22] (32-bit)

About

$ ./OpenCOR -a
OpenCOR [2013-06-22] (32-bit)
GNU/Linux 3.5.0-34-generic
Copyright 2011-2013

OpenCOR is a cross-platform CellML-based modelling environment which can be
used to organise, edit, simulate and analyse CellML files.

Plugins

$ ./OpenCOR -p
The following plugin is loaded:
 - CellMLTools: a plugin to access various CellML-related tools.

Command

$ ./OpenCOR -c help
Commands supported by CellMLTools:
 * Display the commands supported by CellMLTools:
      help
 * Export <in_file> to <out_file> using <format> as the destination format:
      export <in_file> <out_file> <format>
   <format> can take one of the following values:
      cellml_1_0: to export a CellML 1.1 file to CellML 1.0
$ ./OpenCOR -c CellMLTools::export in.cellml out.cellml cellml_1_0

Graphical User Interface (GUI)

OpenCOR offers a consistent GUI across the different platforms it supports. The look and feel of the interface is determined by the plugins which are selected. The first time you run OpenCOR, it will look something like this:

Default looking OpenCOR

There is a central area which is used to interact with files. By default, no file is opened, hence the OpenCOR logo is shown instead. To the sides, there are dockable windows which each provide additional features. Those windows can be dragged and dropped to the top or bottom of the central area:

Use of all docking areas

Alternatively, they can be undocked:

Undocked window

Or even closed, either by directly closing the window itself or by unticking the corresponding menu item (under the View menu, or the Help menu for the Help window):

Showing/hiding windows

Deselecting all the plugins will result in OpenCOR looking 'empty':

Empty looking OpenCOR

The GUI version of OpenCOR relies on a plugin approach:

CellML annotation view plugin

The CellMLAnnotationView plugin can be used to annotate CellML files. If you open a CellML file which does not contain any annotation, then it will look something like this:

CellMLAnnotationView plugin: default view

All the CellML elements which can be annotated are listed to the left of the view. If you right click on any of them, you will get a popup menu which you can use to expand/collapse all the child nodes, as well as remove the metadata associated with the current CellML element or the whole CellML file:

CellMLAnnotationView plugin: context menu

Annotate a CellML element

Say that you want to annotate the sodium_channel component. First, you need to select it:

CellMLAnnotationView plugin: select a CellML element

Next, you need to specify a BioModels.net qualifier. If you do not know which one to use, click on the applications-internet button to get some information about the current BioModels.net qualifier:

CellMLAnnotationView plugin: select a BioModels.net qualifier

From there, go through the list of BioModels.net qualifiers until you find the one you are happy with. Here, we will use bio:isVersionOf:

CellMLAnnotationView plugin: select bio:isVersionOf as a qualifier

Now, we need to retrieve some possible ontological terms to describe our sodium_channel component. For this, we must enter a search term, which in our case is going to be sodium. OpenCOR then uses the RESTful service from SemanticSBML to provide us with a list of possible ontological terms (25, in this case):

CellMLAnnotationView plugin: list of possible ontological terms

A quick look through the list tells us that we probably want to use the ChEBI term whose identifier is 29101. If you want to know more about the ChEBI resource, you can click on its corresponding link:

CellMLAnnotationView plugin: look up some resource information

Similarly, if you want to know more about the ChEBI identifier:

CellMLAnnotationView plugin: look up some identifier information

Now that you are happy with your choice of ontological term, you can associate it with the sodium_channel component by clicking on its corresponding list-add button:

CellMLAnnotationView plugin: associate an ontological term with a CellML element

As you will have seen, the ontological term you have just added can no longer be added again, but it can be removed by clicking on its corresponding list-remove button or by using the context menu (see above).

Now, say that you also want to add the next ontological term. You can obviously do so by clicking on the corresponding list-add button, but you could also enter pubchem.substance/4541 (i.e. <resource>/<id>) in the term field. Indeed, OpenCOR will recognise this 'term' as a valid ontological term and will offer to add it directly:

CellMLAnnotationView plugin: directly associate an ontological term with a CellML element

From there, if you were to decide that the last ontological term is not suitable, then you can remove it by clicking on its corresponding list-remove button:

CellMLAnnotationView plugin: remove an ontological term from a CellML element

Unrecognised annotations

Annotations consist of RDF triples, each made of a subject, a predicate and an object. OpenCOR recognises RDF triples whose subject identifies a CellML element; it expects the predicate to be a BioModels.net qualifier and the object to be an ontological term.

Ontological terms used to be identified using MIRIAM URNs, but these have now been deprecated in favour of identifiers.org URIs. OpenCOR recognises both, but it will only serialise annotations using identifiers.org URIs.
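
For illustration, an identifiers.org URI is simply composed from a <resource>/<id> pair. The small Python sketch below (purely illustrative; the exact URI layout is defined by identifiers.org, not by this code) builds the URIs for the two terms used in the examples above:

    # Compose identifiers.org-style URIs from <resource>/<id> pairs.
    # ChEBI identifiers conventionally carry a 'CHEBI:' prefix in such URIs.
    def identifiers_org_uri(resource, identifier):
        return 'http://identifiers.org/{0}/{1}'.format(resource, identifier)

    print(identifiers_org_uri('chebi', 'CHEBI:29101'))
    # http://identifiers.org/chebi/CHEBI:29101
    print(identifiers_org_uri('pubchem.substance', '4541'))
    # http://identifiers.org/pubchem.substance/4541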

Now, it may happen that a file contains annotations which do not fit OpenCOR's current requirements. In this case, OpenCOR will display the annotations as a simple list of RDF triples:

CellMLAnnotationView plugin: unrecognised annotations

If you ever come across a type of annotation which you think OpenCOR ought to recognise, but does not, then please do contact us.

CellML model repository plugin

The CellMLModelRepository plugin offers an interface to the CellML Model Repository. By default, it lists all the CellML models found in the repository:

CellMLModelRepository plugin: default view

The list can then be filtered. For example, if you enter Noble as a filter, you will get:

CellMLModelRepository plugin: filtered list of CellML files

Clicking on any of the listed links will open the workspace for that model in your (default) web browser. For example, clicking on the Noble, 1962 link will take you here. From there, you can get access to the latest exposure for that model which, in the present case, can be found here.

CellML tools plugin

The CellMLTools plugin consists of various CellML-related tools which can be accessed through the Tools menu.

CellML File Export To...

These tools can be used to export a CellML model to various formats:

File browser plugin

The FileBrowser plugin offers a convenient way to access your physical files, remembering the folder or file which was selected when you last ran OpenCOR. By default, it will start with your home directory:

FileBrowser plugin: default view

As you would expect, double clicking on a folder will expand its contents, as can be seen by double clicking on the Windows directory:

FileBrowser plugin: double clicking on a folder

On the other hand, double clicking on a file will result in it being opened in OpenCOR. The rendering of the file will depend on the currently selected view. In the case of the CellML annotation view, it will look as follows:

FileBrowser plugin: double clicking on a file

Folders and files can also be dragged from the File Browser window and dropped onto the file organiser window.

Tool bar

user-home Go to the home folder
go-up Go to the parent folder
go-previous Go to the previous folder or file
go-next Go to the next folder or file

File organiser plugin

The FileOrganiser plugin allows you to organise your files virtually, i.e. independently of where they are physically located. Your virtual environment is remembered from one session to another and is, by default, empty:

FileOrganiser plugin: default view

Now, say that you are working on a specific project. You might then want to create a (virtual) folder which contains (a reference to) all the files you need for your project. To go about this, you first need to click on the folder-new button in the toolbar (or use the context menu). This will add a folder to your virtual environment:

FileOrganiser plugin: create a folder

You can rename the folder as you wish and create other (sub-)folders, if needed:

FileOrganiser plugin: create several (sub-)folders

You can move the (sub-)folders around by dragging and dropping them within your virtual environment, or delete an existing (sub-)folder by clicking on the edit-delete button in the toolbar (or by using the context menu):

FileOrganiser plugin: move/delete (sub-)folders

Next, you might want to open the file browser window, so you can start dragging and dropping files into your virtual environment (alternatively, you can use your system's file manager):

FileOrganiser plugin: add files

As for folders, you can move and delete your (virtual) files:

FileOrganiser plugin: move/delete (virtual) files

Tool bar

folder-new Create a new folder
edit-delete Delete the current folder(s) and/or link(s) to the current file(s)

Help plugin

The Help plugin provides some user documentation which looks as follows:

Help plugin: default view

The documentation includes a menu which gets shown whenever you move your mouse pointer over the information icon (top right):

Help plugin: context menu

The Help plugin also displays special links which, when clicked, send a command to OpenCOR. For example, open the documentation for the Help plugin page in OpenCOR. Now, if you check the bold text below, you will see that its contents are slightly different, depending on whether you are reading this in OpenCOR or from here:

To open the About box, select the Help | About... menu...

Tool bar

go-home Go to the home page
go-previous Go back
go-next Go forward
edit-copy Copy the selection to the clipboard
zoom-original Reset the size of the help page contents
zoom-in Zoom in the help page contents
zoom-out Zoom out the help page contents
document-print Print the help page contents

Single cell view plugin

The SingleCellView plugin can be used to run CellML models which consist of either a system of ordinary differential equations (ODEs) or differential algebraic equations (DAEs). The system may be non-linear.

Open a CellML file

Upon opening a CellML file, OpenCOR will check that it can be used for simulation. If it cannot, then a message will describe the issue:

SingleCellView plugin: invalid CellML file

Alternatively, if the CellML file is valid, then the view will look as follows:

SingleCellView plugin: valid CellML file

The view consists of two main parts, the first of which allows you to customise the simulation, the solver and the model parameters. The second part is used to plot simulation data. In the parameters section, each model parameter has an icon associated with it to highlight its type:

constant (Editable) constant
computedConstant Computed constant
state (Editable) state
rate Rate
algebraic Algebraic

Simulate an ODE model

To simulate a model, you need to provide some information about the simulation itself, i.e. its starting point, ending point and point interval. Then, you need to specify the solver that you want to use. The solvers available to you will depend on which solver plugins you selected, as well as on the type of your model (i.e. ODE or DAE). In the present case, we are dealing with an ODE model and all the solver plugins are selected, so OpenCOR offers CVODE, forward Euler, Heun, Midpoint, and second- and fourth-order Runge-Kutta as possible solvers for our model.

SingleCellView plugin: ODE solvers

Each solver comes with its own set of properties which you can customise. For example, if we select Euler (forward) as our solver, then we can customise its Step property:

SingleCellView plugin: Forward Euler solver

At this stage, we can run our model by pressing the F9 key or by clicking on the media-playback-start button. Then, or before, we can decide on a model parameter to be plotted against the variable of integration (which, here, is time since the simulation properties are expressed in milliseconds). All the model parameters are listed to the left of the view, grouped by components in which they were originally defined. To select a model parameter, click on its corresponding check box:

SingleCellView plugin: failed simulation

As can be seen, the simulation failed. Several model parameters, including the one we selected, have a nan value (i.e. not a number). In this case, it is because the solver was not properly set up. Its Step property is too big. If we set it to 0.01 milliseconds, reset the model parameters (by clicking on the view-refresh button), and restart the simulation, then we get the following trace:

SingleCellView plugin: successful simulation
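
As an aside, the failure above is a classic property of the forward Euler method rather than anything specific to OpenCOR: if the step is too large relative to the fastest dynamics of the system, the numerical solution blows up. The following generic Python sketch (a simple test equation, not the Noble 1962 model) illustrates the effect of the step size:

    # Forward Euler applied to dy/dt = -50*y, y(0) = 1 (a generic test problem).
    # For this equation the method is only stable when step < 2/50 = 0.04.
    def forward_euler(step, end=1.0):
        y, t = 1.0, 0.0
        while t < end:
            y += step * (-50.0 * y)
            t += step
        return y

    print(forward_euler(0.1))    # step too large: |y| explodes instead of decaying
    print(forward_euler(0.01))   # small enough step: decays towards zero, as expected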

Roughly the same trace can also be obtained using CVODE as our solver:

SingleCellView plugin: CVODE solver

However, the simulation runs so quickly that we do not get a chance to see its progress. Between the view-refresh and text-csv buttons, there is a wheel which we can use to add a short delay between the output of two data points. Here, we set the delay to 13 ms. This allows us to rerun the simulation, after having reset the model parameters, and pause it at a point of interest:

SingleCellView plugin: pausing a simulation

Now, we can modify any of the model parameters identified by either the state or constant icon, but let us just modify g_Na_max (under the sodium_channel component) by setting its value to 0 milliS_per_cm2. Then, we resume the simulation and we can see the effect on the model:

SingleCellView plugin: resuming a simulation

If we want, we could export all the simulation data to a comma-separated values (CSV) file. To do so, we need to click on the text-csv button.
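
Once exported, the data can be post-processed with any external tool. A minimal Python sketch for reading the file back might look like this (the file name is a placeholder for wherever you saved the export):

    import csv

    # 'simulation.csv' is a placeholder; OpenCOR asks where to save the export.
    with open('simulation.csv') as f:
        rows = [row for row in csv.reader(f) if row]

    print('Number of rows exported: {0}'.format(len(rows)))
    print('First row: {0}'.format(rows[0]))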

Simulate a DAE model

Simulating a DAE model is similar to simulating an ODE model, except that OpenCOR only offers one DAE solver (IDA) at this stage:

SingleCellView plugin: simulate a DAE model

Simulate a CellML 1.1 model

So far, we have only simulated CellML 1.0 models, but we can also simulate CellML 1.1 models, i.e. models which import units and/or components from other models:

SingleCellView plugin: simulate a CellML 1.1 model

Simulate several models at the same time

Each simulation is run in its own thread which means that several simulations can be run at the same time. Simulations running in the 'background' display a small progress bar in the top tab bar while the 'foreground' simulation uses the main progress bar at the bottom of the view:

SingleCellView plugin: simulate several models at once

Plotting area

The plotting area offers several features which can be activated by:

  • Zooming in:
    • holding the right mouse button down, and moving the mouse to the right/bottom to zoom in on the X/Y axis; or
    • moving the mouse wheel up.
  • Zooming out:
    • holding the right mouse button down, and moving the mouse to the left/top to zoom out on the X/Y axis; or
    • moving the mouse wheel down.
  • Zooming into a region of interest:
    • pressing Ctrl and holding the right mouse button down, and moving the mouse around.
  • Panning:
    • holding the left mouse button down, and moving the mouse around (this obviously requires the plotting area to have been zoomed in first).
  • Coordinates of any point:
    • pressing Shift and holding the left mouse button down, and moving the mouse around.
  • Copying the contents of the plotting area to the clipboard:
    • double-clicking the left mouse button.

Tool bar

media-playback-start Run the simulation
media-playback-pause Pause the simulation
media-playback-stop Stop the simulation
view-refresh Reset all the model parameters
text-csv Export the simulation data to CSV

Supported platforms

OpenCOR can be used on the following versions of Windows, Linux and OS X.

Windows

OpenCOR is supported on the 32-bit and 64-bit versions of Windows XP, Windows Vista, Windows 7 and Windows 8.

Linux

OpenCOR is supported on the 32-bit and 64-bit versions of Ubuntu 12.04 LTS (Precise Pangolin), 12.10 (Quantal Quetzal) and 13.04 (Raring Ringtail), but it might also work with earlier versions of Ubuntu, as well as some other Linux distributions, though additional system libraries might be needed for the latter.

OS X

OpenCOR is supported on OS X 10.8 (Mountain Lion).

Plugin approach

OpenCOR is a plugin-based application. This means that if no plugins are selected, then OpenCOR can do next to nothing.

As can be seen by opening the Plugins dialog box (by selecting the Tools | Plugins... menu) and by unselecting Show only selectable plugins (if necessary), OpenCOR supports different types of plugins (Organisation, Editing, Simulation, Miscellaneous, API and Third-party; see below):

Plugins window

You can select which plugins you want to use. However, plugins which are needed by other plugins (e.g. the Core plugin is needed by the CellML model repository plugin) cannot be directly selected. Instead, they will be automatically selected if and only if they are needed by at least one other plugin.

Most of the selectable plugins come with some kind of a GUI:

Organisation

Organisation plugins are used to search, open, organise, etc. your files:

  • CellMLModelRepository: a plugin to access the CellML model repository.
  • FileBrowser: a plugin to access your local files.
  • FileOrganiser: a plugin to virtually organise your files.

Editing

Editing plugins are used to edit part or all of your files using one of several possible views:

  • CellMLAnnotationView: a plugin to annotate CellML files.

There are also some non-selectable Editing plugins:

  • CoreEditing: the core editing plugin.
  • CoreCellMLEditing: the core CellML editing plugin.

Simulation

Simulation plugins are used to simulate your files:

  • CVODESolver: a plugin which uses CVODE to solve ODEs.
  • ForwardEulerSolver: a plugin which implements the Forward Euler method to solve ODEs.
  • FourthOrderRungeKuttaSolver: a plugin which implements the fourth-order Runge-Kutta method to solve ODEs.
  • HeunSolver: a plugin which implements the Heun method to solve ODEs.
  • MidpointSolver: a plugin which implements the Midpoint method to solve ODEs.
  • IDASolver: a plugin which uses IDA to solve DAEs.
  • KINSOLSolver: a plugin which uses KINSOL to solve non-linear algebraic systems.
  • SecondOrderRungeKuttaSolver: a plugin which implements the second-order Runge-Kutta method to solve ODEs.
  • SingleCellView: a plugin to run single cell simulations.

There is also a non-selectable Simulation plugin:

  • CoreSolver: the core solver plugin.

Miscellaneous

Miscellaneous plugins are used for various purposes:

  • Help: a plugin to provide help.

There are also some non-selectable Miscellaneous plugins:

  • Core: the core plugin.
  • Compiler: a plugin to support code compilation.
  • CellMLSupport: a plugin to support CellML.
  • CellMLTools: a plugin to access various CellML-related tools.

API

(Non-selectable) API plugins are used to provide access to external APIs:

Third-party

(Non-selectable) third-party plugins are used to provide access to third-party libraries:

  • LLVM: a plugin to access LLVM (as well as Clang).
  • SUNDIALS: a plugin to access CVODE, IDA and KINSOL from the SUNDIALS library.
  • Qwt: a plugin to access Qwt.

MAP Client

The MAP Client is a cross-platform framework for managing workflows. MAP Client is a plugin-based application that can be used to create workflows from a collection of workflow steps.

The MAP Client is an application written in Python and based on Qt, the cross-platform application and UI framework. Further details are available in the documents listed below.

MAP Installation and Setup Guide

This document describes how to install and set up the MAP software for use on your machine. The MAP software is a Python application that uses the PySide Qt library bindings. The instructions in this document cover installation and setup on a Windows-based operating system. The instructions for GNU/Linux and OS X are similar and should be extrapolated from these instructions. There are some side notes for these other operating systems to help, but not full or dedicated instructions. If for any reason you get stuck and cannot complete the instructions, please contact us.

MAP

The MAP framework is written in Python and is designed to work with Python 2 and Python 3. The MAP application is tested against Python 2.6, Python 2.7 and Python 3.3 and should work with any of these Python versions. Currently the MAP framework is not packaged as an application, so the user needs to set up the environment prior to launching the mapclient.py executable Python script.

The MAP application consists of the framework and various tools; by itself it can do very little. It is the job of the plugins to provide functionality. For this reason, the MAP application as referred to in this section of the instructions may be described as the barebones application.

To execute the barebones application we need to first install some dependencies:

  1. Python (and make sure to add the Python and Python\Scripts folders to your system PATH).
  2. PySide (PySide and PyQt4 are virtually interchangeable, but using PyQt4 would currently require some textual changes).
  3. Python setuptools, and then use easy_install.exe to install:
    1. the Requests Python library (easy_install requests)
    2. the OAuthlib Python library (not the OAuth Python library) (easy_install oauthlib).

Also, if we wish to interact with the Physiome Model Repository (PMR) we need:

We can now install the barebones MAP client application. The barebones application can be launched from a command window with the following command, run in the extracted mapclient/src folder:

mapclient.py

which should result in an application window similar to that shown below.

_images/mapClientBarebones.png

Now that the barebones MAP application is installed and running we can move on to some useful plugins.

MAP Plugins

The installation of MAP plugins simply requires obtaining the plugins and then using the MAP plugin manager to let the MAP client know where to look for them. There is also a GitHub project which provides a common collection of MAP plugins. For the purposes of this tutorial, the autosegmentationstep plugin will be used. You can download a copy of the plugin, extract it, and then follow the instructions for adding the folder in which you extracted the plugin to the MAP plugin manager.

Zinc and PyZinc

Zinc is an advanced field manipulation and visualisation library and PyZinc provides Python bindings to the Zinc library. Binaries are available for download for Linux, Windows, and OS X. The MAP client is able to make use of Zinc for advanced visualisation and image processing steps. To get PyZinc installed, follow these steps:

  1. Install Zinc using either: the Windows installer (ensuring that you enable the option for the installer to add Zinc to the system PATH); or unzip the archive and manually copy the library file to somewhere on your PATH (which could include the PyZinc installation folder).
  2. Unzip the downloaded PyZinc archive.
  3. In a command window, change into the folder that PyZinc was extracted into.
  4. Execute the following command: python setup.py install (this uses a mechanism similar to the easy_install tool mentioned above).

You can check that you have Zinc and PyZinc correctly installed and functional by running the volume_fitting.py application provided with the tutorial materials. If Zinc and PyZinc are working, you should get an application window similar to that shown below, with the interactive three-dimensional model viewer displayed. Note that you will need to restart the command window after installing PyZinc in order to refresh the system PATH.

_images/volumeFitting.png
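
Before running volume_fitting.py, a quick way to confirm that the PyZinc bindings are importable is the small sketch below; it assumes the bindings install under the opencmiss.zinc package (check the PyZinc documentation if the import name differs for your release):

    # Minimal sanity check that the PyZinc bindings are on the Python path.
    try:
        import opencmiss.zinc as zinc
        print('PyZinc imported from: ' + zinc.__file__)
    except ImportError as error:
        print('PyZinc does not appear to be installed correctly: {0}'.format(error))
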
Which Binary?

There are a number of binaries available for any given platform and you must match the package description with your system setup. The package description contains the package name, version, architecture, operating system and, in the case of PyZinc, the Python version. The package extension indicates the type of package; packages come in two main flavours: installer/package manager, or archive.

Additionally the version of the PyZinc binaries you download must match the version of the Zinc library binaries.

MAP Features Demonstration

Section author: Hugh Sorby

Note

MAP is currently under active development, and this document will be updated to reflect any changes to the software or new features that are added. You can follow the development of MAP at the launchpad project.

This document details the features of MAP, a cross-platform framework for managing workflows. MAP is a plugin-based application that can be used to create workflows from a collection of workflow steps.

This demonstration is based on version 0.9.0 of MAP, available from the project downloads. Directions for installing MAP and getting the MAP plugins are available in the MAP Installation and Setup Guide.

In this demonstration we will cover the features of MAP. We will start with a quick tour and then create a new workflow that will help us segment a region of interest from a stack of images.

Quick Tour

When you first load MAP, it will look something like this:

_images/blank_MAP_1.png

In the main window we can see three distinct areas that make up the workflow management side of the software. These three areas are the menu bar (at the top), the step box (on the left), which contains the steps that you can use to create your workflow, and the workflow canvas (on the right), an area for constructing a workflow.

In the Step box we will only see two steps. This is because we have only loaded the default Steps and not loaded any of the external plugins that MAP can use.

Step Box

The Step box provides a selection of steps that are available to construct a workflow from. The first time we start the program only the default plugins are available. To add more steps we can use the Plugin Manager tool. To use a step in our workflow we drag the desired step from the step box onto the workflow canvas.

Workflow canvas

The workflow canvas is where we construct our workflow. We do this by adding the steps that make up our workflow to the workflow canvas from the Step box. We then make connections between the workflow steps to define the complete workflow.

When a step is added to the workflow, the icon which is visible in the Step box is augmented with visualisations of the step's ports and the step's configured status. The annotation of a step's ports will show when the mouse is hovered over a port. The image below shows the Image Source step with the annotation for the port displayed.

_images/step_with_port_info_displayed_1.png

Tools

MAP currently has three tools that may be used to aid the management of the workflow. They are the Plugin Manager tool, the Physiome Model Repository (PMR) tool and the Annotation tool. For a description of each tool see the relevant sections.

Plugin Manager Tool

The plugin tool is a simple tool that enables the user to add or remove additional plugin directories. MAP comes with some default plugins which the user can decide to load or not. External directories are added with the add directory button. Directories are removed by selecting the required directory in the Plugin directories list and clicking the remove directory button.

Whilst additions to the plugin path will be visible immediately in the Step box, deletions will not be apparent until the next time the MAP Client is started. This behaviour is a side-effect of the Python programming language.

_images/plugin_manager_1.png
Physiome Model Repository (PMR) Tool

The PMR tool uses webservices and OAuth to communicate between itself (the consumer) and the PMR website (the server). Using this tool we can search for and find suitable resources on PMR.

The PMR website uses OAuth to authenticate a consumer and determine consumer access privileges. Here we will discuss the parts of OAuth that are relevant to getting you (the user) able to access resources on PMR.

In OAuth we have three players: the server, the consumer and the user. The server provides a service that the consumer wishes to use. It is up to the user to allow the consumer access to the server's resources and to set the level of access to the resource. For the consumer to access privileged information belonging to the user that is stored on the server, the user must register the consumer with the server; this is done by the user giving the consumer a temporary access token. This temporary access token is then used by the consumer to finalise the transaction and acquire a permanent access token. The user can deny the consumer access at any time by logging into the server and revoking the permanent access token.

If you want the PMR tool to have access to privileged information (your non-public workspaces stored on PMR), you will need to register the PMR tool with the PMR website. We do this by clicking on the register link as shown in the figure below. This does two things: it shows the Application Authorisation dialog and opens a web browser at the PMR website. [If you are not logged in at the PMR website you will need to do so now to continue; instructions on obtaining a PMR account are available here.] On the PMR website you are asked to either accept or deny access to the PMR tool. If you allow access, the website will display a temporary access token that you will need to copy and paste into the Application Authorisation dialog so that the PMR tool can obtain the permanent access token.

_images/PMRTool_1.png
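
For readers who prefer code to prose, the sketch below shows the same three-legged OAuth flow using the requests-oauthlib convenience wrapper (which builds on the Requests and OAuthlib libraries listed in the installation guide). The consumer credentials and endpoint URLs are hypothetical placeholders, not the real PMR values, and the MAP PMR tool performs these steps for you:

    from requests_oauthlib import OAuth1Session

    # Hypothetical consumer credentials and endpoint URLs, for illustration only.
    CONSUMER_KEY, CONSUMER_SECRET = 'map-client', 'secret'
    BASE = 'https://pmr.example.org'

    session = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET)

    # 1. The consumer asks the server for a temporary (request) token.
    session.fetch_request_token(BASE + '/OAuthRequestToken')

    # 2. The user authorises the consumer in a web browser; the server then
    #    displays a temporary access code for the user to copy.
    print('Visit: ' + session.authorization_url(BASE + '/OAuthAuthorizeToken'))
    verifier = 'paste-the-code-from-the-website-here'

    # 3. The consumer exchanges the temporary token for a permanent access token.
    access_token = session.fetch_access_token(BASE + '/OAuthGetAccessToken',
                                              verifier=verifier)
    print(access_token)
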
Annotation Tool

The Annotation tool is a very simple tool to help a user annotate the Workflow itself and the Step data directories that are linked to PMR. At this stage there is a limited vocabulary that the Annotation tool knows about, but this is intended to be extended in coming releases. The vocabulary that the Annotation tool is aware of is available in the three combo-boxes near the top of the dialog.

_images/top_annotation_1.png

The main part of the Annotation tool shows the current annotation from the current target.

_images/main_annotation_1.png

In the above image we can see the list of annotations that have been added to the current target. This is a simplified view of the annotation with the prefix of the terms removed for clarity.

MAP Tutorial - Create Workflow

Section author: Hugh Sorby

Note

MAP is currently under active development, and this document will be updated to reflect any changes to the software or new features that are added. You can follow the development of MAP at the launchpad project.

This document takes the reader through the process of creating a workflow from existing MAP plugins. Reading through the MAP Features Demonstration is a good way to become familiar with the features of the MAP application.

Getting Started

To get started with MAP we need to create a new workflow. To do this we use the File ‣ New ‣ Workflow menu option (Ctrl-N shortcut). This option will present the user with a directory selection dialog. Use the dialog to select a directory where the workflow can be saved. Once we have chosen a directory, the step box and workflow canvas will become enabled.

To create a meaningful workflow we will need to use some external plugins. To load these plugins we will use the Plugin Manager tool. The Plugin Manager tool can be found under the Tools menu. Use the Plugin Manager to add the directory location of the MAP Plugins. After confirming the changes to the Plugin Manager you should see a few new additions to the Step box.

Creating the Workflow

To create a workflow we use drag 'n' drop to drag steps from the Step box and drop them onto the workflow canvas. When steps are first dropped onto the canvas they show a red cog icon to indicate that the step is not yet configured. At a minimum, a step requires an identifier to be set before it can be used.

Drag the Image Source, Data Store and Automatic Segmenter steps onto the workflow canvas. All the steps except the 'Automatic Segmenter' step will show a red cog, which indicates that the step needs to be configured. To configure a step we can either right click on the step to bring up a context menu from which the configure action can be chosen, or simply click the red cog directly. See the relevant section for the configuration of a particular step.

Note

When configuring a step you are asked to set an identifier. The identifier you set must be unique within the workflow and it must not start with a '.'.

Configuring the Image Source Step

The Image Source step requires a location. This location contains the images to import. The location may be a directory on the local hard disk or a workspace on PMR. Here we will show how to configure the Image Source step with images that have been stored in a workspace on PMR.

First, each step requires a unique id. The id is used to create a file containing the step configuration information. The id for the Image Source step is used to create a directory under the workflow project directory. In the identifier edit box, enter a directory name. Once a valid identifier is entered, the red highlight around the edit box will be turned off.

Next, change to the PMR tab, where we will see an ellipsis button for bringing up the PMR tool dialog. You need to register the PMR tool to access certain web services; the details on how to do this are available here. The remainder of this tutorial will assume you have set up access to PMR properly. In the search box of the PMR dialog we need to enter the search term 'blood-vessels'. The result of the search should look like the image below.

_images/PMRTool_2.png

Select this entry in the search listing and click 'Ok'. The selected PMR workspace will be downloaded in the background. When the download is finished the red cog icon will disappear. If the download is not successful a dialog will appear to inform you of the error.

MAP is not set up to work with streamed resources, so we must download the workspace from PMR.

Configuring the Point Cloud Step

Configuring the Point Cloud step is trivial at this time. This is because the step only requires an identifier to be set. The identifier will be used to create a directory where the received point cloud will be serialized.

Executing the Workflow

At this point you should have a workflow area looking like this:

_images/configured_MAP_1.png

Once all the steps in the workflow are configured (no more red cog icons), we can make connections between the steps. To make a connection between two steps, the first step must provide what the second step uses. When trying to connect two steps that cannot be connected, you will see a no entry icon over the connection for a short period of time and then the connection will be removed. The following image shows an incorrect connection being attempted.

_images/error_connection.png

If the mouse is hovered over a port you will see a description of what the port provides or uses. To make a connection click on a port and drag the mouse to the port to be connected.

To execute the workflow we need to connect up the steps in the correct manner and save the workflow. The workflow should be connected up as can be seen in the following image.

_images/connected_MAP_1.png

Once the workflow has been saved the execute button in the lower left corner should become enabled. Clicking the execute button will, naturally enough, execute the workflow step by step.

Note

We can make connections between steps at any time, not just when all steps have been properly configured.

Automatic Segmenter Step

The 'Automatic Segmenter' step actually allows us to interact with the executing workflow. With this step we can move the image plane up and down and change the visibility of the graphical items in the scene. The image plane is moved using the slider on the left hand side. The visibility of the graphical items is controlled by checking or unchecking the relevant check boxes.

MAP Plugins

The plugin lies at the heart of the MAP framework. The key idea behind the plugins is to make them as simple as possible to implement. The interface is defined in documentation, and the plugin developer is expected to adhere to it. The framework leaves the responsibility of conforming to the plugin interface up to the plugin developer. The plugin framework is based on Marty Alchin's [1] article on a plugin framework for Django. The plugin framework is very lightweight, requires no external libraries, and can be made to work with Python 2 and Python 3 simultaneously.
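
The core idea of that plugin pattern is a 'mount point': a metaclass which automatically records every class that derives from a class using it. The sketch below is a simplified illustration of the pattern only, not the actual MAP framework source:

    class MountPoint(type):
        """Simplified Marty Alchin-style plugin mount point (illustrative only)."""
        def __init__(cls, name, bases, attrs):
            if not hasattr(cls, 'plugins'):
                cls.plugins = []          # runs once, for the mount point class itself
            else:
                cls.plugins.append(cls)   # runs for every plugin class derived from it

    # Python 3 syntax shown; on Python 2 the metaclass is set via __metaclass__.
    class ExampleStepMountPoint(metaclass=MountPoint):
        pass

    class MyStep(ExampleStepMountPoint):
        pass

    print(ExampleStepMountPoint.plugins)  # [<class '__main__.MyStep'>]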

Workflow Step

The Workflow Step is the basic item within which plugin developers place their software. A workflow step can be of any size and complexity, although it must adhere to the plugin design to work properly with the application. Every step that wishes to act like a Workflow Step must derive itself from the Workflow step mountpoint. The Workflow step mountpoint is the interface between the application and the plugin. The Workflow step mountpoint can be imported like so:

from mountpoints.workflowstep import WorkflowStepMountPoint

A skeleton step is provided as a starting point for developers to create their own workflow steps. The skeleton step is actually a valid step in its own right, and it will show up in the Step box if enabled. However, the skeleton step has no use other than as an item to drag around on the workflow area. The skeleton step is discussed below; first, however, the plugin interface is discussed.

Plugin Interface

The plugin interface is the layer between the application and the developer's plugin. The plugin interface is not defined by contract, as we so often see in Java, but rather by documentation. This puts the burden of the specification on the documentation and the burden of conformity on the developer. The underlying theory is that the developer is able to follow the specification without the application having to do rigorous checks to make sure this is the case. The phrase 'if it walks like a duck' is often used.

In this section the specification of the Workflow step plugin interface is given. It is then up to the developer to make sure their plugin behaves like one.

The details of the plugin interface are provided in the documentation of the source code in the relevant source file, and additionally here for easy reference. The documentation provided with the source code is very direct, with little explanation; the documentation here provides a bit more explanation and discussion of the various aspects of the plugin interface. The documentation provided here should be considered secondary to the documentation provided with the source code, which remains the definitive reference.

There are, essentially, three different levels of the plugin design:

  1. The Musts
  2. The Shoulds
  3. The Coulds

Creating a workflow step that satisfies the musts will create an actual workflow step that can be added to the workflow area and interacted with, but it won't be very useful. Satisfying the shoulds will usually be sufficient for the very simplest of steps. Simple steps are, for instance, ones that provide images or location information for data. Doing some of the coulds will create a much more interesting step.

The requirements for creating a step have been kept as simple as possible; this is to allow the developer a quick route into the development of the step content.

The following three sections discuss these three levels in more detail.

A Step Must
  • The plugin must be derived from the WorkflowStepMountPoint class defined in the package mountpoints.workflowstep

  • Accept a single parameter in its __init__ method.

  • Define a name for itself; this must be passed into the initialisation of the base class.

  • Define the methods

    def configure(self):
        pass
    
    def getIdentifier(self):
        pass
    
    def setIdentifier(self, identifier):
        pass
    
    def serialize(self, location):
        pass
    
    def deserialize(self, location):
        pass
    
A Step Should
  • Implement the configure method to configure the step. This typically takes the form of a dialog. When implementing this function, the class method self._configureObserver() should be called to inform the application that the step configuration has finished.
  • Implement the getIdentifier/setIdentifier methods to return the identifier of the step.
  • Implement the serialize/deserialize methods. The steps should serialize and deserialize from a file on disk located at the given location.
  • Define a class attribute _icon of type QtGui.QImage.
  • Define information about what the step uses and/or what it provides. This is achieved by defining ports on the step.
A Step Could
  • Implement the method 'portOutput(self)' if it was providing some information to another step.
  • Implement the method 'execute(self, dataIn)' if it uses some information from another step. If a step implements the 'execute(self, dataIn)' method then it must call '_doneExecution()' when the step is finished.
  • Define a category using the '_category' attribute. This attribute will add the step to the named category in the step box, or it will create the named category if it is not present.

Ports

A port is a device to specify what a workflow step provides or uses. A port is described using Resource Description Framework (RDF) triples. The port description is used to determine whether or not two ports may be connected together. One port can either use or provide one thing. A single port must not both provide a thing and use a thing. Ports are ordered by entry position.

Ports are added by using the 'addPort(self, triple)' method from the base class.
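
Putting the musts, shoulds and ports together, a hypothetical minimal step might look something like the sketch below. The port URIs, icon path, category name and base-class initialisation arguments are illustrative assumptions of this sketch only; consult the source-code documentation for the definitive interface:

    from PySide import QtGui

    from mountpoints.workflowstep import WorkflowStepMountPoint


    class ExamplePointCloudStep(WorkflowStepMountPoint):
        '''A hypothetical step that provides a point cloud to other steps.'''

        def __init__(self, location):
            # The step name is passed to the base class initialisation, along with
            # the single parameter received here (an assumption of this sketch).
            super(ExamplePointCloudStep, self).__init__('Example Point Cloud', location)
            self._identifier = ''
            self._category = 'Source'  # optional: groups the step in the Step box
            self._icon = QtGui.QImage(':/examplestep/icon.png')  # illustrative resource path
            # Declare a single 'provides' port; this RDF triple is illustrative only.
            self.addPort(('http://physiomeproject.org/workflow/1.0/rdf-schema#port',
                          'http://physiomeproject.org/workflow/1.0/rdf-schema#provides',
                          'http://physiomeproject.org/workflow/1.0/rdf-schema#pointcloud'))

        def configure(self):
            # A real step would show a configuration dialog here, then notify the
            # application that configuration has finished.
            self._configureObserver()

        def getIdentifier(self):
            return self._identifier

        def setIdentifier(self, identifier):
            self._identifier = identifier

        def serialize(self, location):
            pass  # write this step's configuration to a file at 'location'

        def deserialize(self, location):
            pass  # read this step's configuration back from 'location'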

Skeleton Step

The skeleton step satisfies the musts of the plugin interface. It is a minimal step and it is set out as follows.

A Python package with the step name is created, in this case 'skeletonstep'. In the module file we add the code that needs to be read when the plugins are loaded.

The module file performs four functions. It contains the version information and the author's name for the module; for instance, the skeleton step has a version of '0.1.0' and an author's name of 'Xxxx Yyyyy'. It adds the current directory to the Python path, so that the step's Python files know where they are in relation to the Python path. It also (optionally) prints out a message showing that the plugin has been loaded successfully. But the most important function it performs is to import the Python file that contains the class derived from the workflow step mountpoint.
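
As a rough sketch (not the actual skeleton step source), such a module file might look something like this:

    # __init__.py for a hypothetical 'skeletonstep' plugin package.
    __version__ = '0.1.0'
    __author__ = 'Xxxx Yyyyy'

    import os
    import sys

    # Add this package's directory to the Python path so the step's Python
    # files can be found relative to it.
    current_dir = os.path.dirname(__file__)
    if current_dir not in sys.path:
        sys.path.insert(0, current_dir)

    # Importing the step module registers the step class with the workflow
    # step mount point; the print is an optional load confirmation.
    from skeletonstep import step
    print("Plugin 'skeletonstep' version {0} by {1} loaded".format(__version__, __author__))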

The 'SkeletonStep' class in the skeletonstep.step package is a very simple class. It derives from 'WorkflowStepMountPoint', calls the base class with the name of the step, accepts a single parameter in its __init__ method and defines the five required functions to satisfy the plugin interface.

When enabled the skeleton step will be a fully functioning step in the MAP Client.

References

[1] http://martyalchin.com/2008/jan/10/simple-plugin-framework/ Marty Alchin on January 10, 2008

MAP Tutorial - Create Plugin

Section author: Hugh Sorby

Note

MAP is currently under active development, and this document will be updated to reflect any changes to the software or new features that are added. You can follow the development of MAP at the launchpad project.

This document takes the reader through the process of creating a new plugin for the MAP Client. The MAP Plugins document defines the plugin interface that the new plugin must adhere to.

A Simple Source Step Example

We need to create a source step for supplying Zinc model files.

First, copy the skeletonstep directory to another directory. To make this step our own, we change the skeletonstep name to zincmodelsourcestep. The places we have to change are:

  1. The topmost directory.

  2. The inner directory; this directory is used to namespace our new step.

  3. In the __init__.py file in the topmost directory, we also need to uncomment the lines:

    from zincmodelsourcestep import step
    print("Plugin '{0}' version {1} by {2} loaded".format(tail, __version__, __author__))

  4. In the __init__.py file in the inner directory, we have to change the name of the class to 'ZincModelSourceStep' and change the name of the step to 'Zinc Model Source'.

Now we need to be able to configure the step. To do this we can use Qt Designer to create a 'configuredialog.ui' file that we can convert into Python code using 'pyside-uic'.
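
As an illustration only (the module and class names below are conventional for pyside-uic output, but are assumptions of this sketch rather than something generated for you here), the converted dialog is typically wrapped in a small QDialog subclass:

    from PySide import QtGui

    # 'configuredialog_ui' is a hypothetical name for the Python module produced by
    # pyside-uic from 'configuredialog.ui'; Ui_ConfigureDialog is the generated class.
    from configuredialog_ui import Ui_ConfigureDialog


    class ConfigureDialog(QtGui.QDialog):
        '''Thin wrapper around the pyside-uic generated user interface.'''

        def __init__(self, parent=None):
            super(ConfigureDialog, self).__init__(parent)
            self._ui = Ui_ConfigureDialog()
            self._ui.setupUi(self)

        def identifier(self):
            # Assumes the .ui file contains a line edit named 'identifierLineEdit'.
            return self._ui.identifierLineEdit.text()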

Developing the Virtual Physiological Human

This tutorial will demonstrate tools, techniques and best practices that aid scientists in the development and application of computational models and simulation experiments in their work toward the creation of a virtual physiological human. The Physiome Model Repository (PMR) provides a framework for the storage, curation and exchange of data. By using standards suitable to their data, scientists maximise their ability to reuse existing knowledge and enable others to make use of their achievements in novel work. Annotations ensure scientists are able to find existing data and are also able to correctly interpret and apply their own data. These tutorials are designed to help demonstrate and promote practices which will aid attendees in their own work. Attendees are encouraged to raise issues specifically related to their needs with the tutors.

Documentation for the software used in this tutorial is available online, including the most recent version of this tutorial itself. This tutorial guides the participant through three common computational modelling scenarios faced by scientists working toward the virtual physiological human. We use these scenarios to achieve scientific outputs using the covered tools, demonstrating practices we believe will help ensure reproducible and reusable science. Each of the scenarios listed below should be worked through in order, and in each scenario we provide examples using either OpenCOR or the MAP workflow tool.

When interacting directly with Mercurial, this tutorial demonstrates how to work with the Physiome Model Repository using TortoiseHg, which provides a Windows explorer integrated system for working with Mercurial repositories.

The equivalent command line versions of the TortoiseHg actions will also be mentioned briefly, so that these ideas can also be used without a graphical client, and on Linux, OS X and similar systems. These will be denoted by boxes like this.

This tutorial requires you to have:

Creating a new piece of work

In the Physiome Model Repository (PMR), a complete piece of work is stored in a workspace. Each workspace is a Mercurial repository, which allows PMR to maintain a complete history of all changes made to every file it contains. In this tutorial, we take you through the creation of a new piece of work which will be stored in a PMR workspace:

Working with the repository using Mercurial

This part of the tutorial will teach you how to create a workspace in the repository, clone the workspace from the model repository using a Mercurial client, add content to the workspace, and then push the cloned workspace to the repository.

Registering an account and logging in

First, navigate to the teaching instance of the Physiome Model Repository at http://teaching.physiomeproject.org/.

Note

The teaching instance of the repository is a mirror of the main repository site found at http://models.physiomeproject.org/, running the latest development version of the PMR Software.

Any changes you make to the contents of the teaching instance are not permanent, and will be overwritten with the contents of the main repository whenever the teaching instance is upgraded to a new PMR Software release. For this reason, you can feel free to experiment and make mistakes when pushing to the teaching instance. Please subscribe to the cellml-discussion mailing list to receive notifications of when the teaching instance will be refreshed.

See the section Migrating content to the main repository for instructions on how to migrate any content from the teaching instance to the main (permanent) Physiome Model Repository.

In order to make changes to models in the repository, you must first register for an account. If you already have an account on the main repository site, your account will also be on the teaching instance. Otherwise, you need to register for an account on the teaching repository. You can register by navigating to the Log in link at the top right of the menu bar and then looking for the New user section of the log in page.

Note

Your username and password are also the credentials you use to interact with the repository via Mercurial.

Once logged in to the repository, you will notice that there are a couple of new links in the navigation bar (My Workspaces and Documentation). The My Workspaces link is where all the workspaces you create later on will be listed. The Log in link is also replaced by your username and a Log out link (which you can access by clicking on your username).

Mercurial username configuration

Important

Username setup for Mercurial

Since you are about to make changes, your name needs to be recorded as part of the workspace revision history. When you commit your changes using Mercurial, it is initially "offline" and independent of the central PMR Software instance. This means that you have to set up your username for the Mercurial client software, even though you have registered a username on the PMR Software site.

You only need to do this once. The MAP PMR tool will help complete these details for you automatically, but it is a good idea to ensure sensible default values are configured, just in case.

Steps for TortoiseHg:

  • Right click on any file or folder in Windows Explorer, and select TortoiseHg ‣ Global Settings.
  • Select Commit and then enter your name followed by your e-mail address in "angle brackets" (i.e. less-than "<" and greater-than ">"). Actually, you can enter anything you want here, but this is the accepted best practice as your email address provides a globally unique identifier. Note that this information becomes visible publicly if the PMR Software instance that you push your changes to is public.

Steps for command line:

  • Edit the config text file:
    • For per repository settings, the file in the repository: <repo>\.hg\hgrc
    • System-wide settings for Linux / OS X: ~/.hgrc
    • System-wide settings for Windows: %USERPROFILE%\mercurial.ini
  • Add the following entry:

    [ui]
    username = Firstname Lastname <firstname.lastname@example.net>
    

A new CellML-based piece of work

In this section we are going to create a new workspace into which we will add a CellML model, annotate the model using OpenCOR, and simulate the model to check that it produces the expected results. We will be using the seminal Noble (1962) cardiac cellular electrophysiology model as the demonstration model for this part of the tutorial.

Create a new workspace

You can find instructions for creating a new workspace on the teaching instance repository in the PMR workspaces documentation. Following those instructions, create a workspace similar to that shown below:

_images/newWorkspace.png

Creating a new workspace to begin a scientific study based on the Noble 1962 cardiac cellular electrophysiology model.

Once you have created the workspace, you will be taken to the workspace listing page. Take particular note of the URI for mercurial clone/pull/push, as highlighted by the arrow below.

_images/emptyWorkspace.png

A view of the newly created and empty workspace. The URI to be used for Mercurial actions is highlighted by the arrow. Note: the workspace URI is unique to every workspace, so yours will be different to the one shown above.

In order to make changes to your workspace, you have to clone it to your own computer. In order to do this, copy the URI for mercurial clone/pull/push as shown above. In Windows explorer, find the folder where you want to create the clone of the workspace. Then right click to bring up the context menu, and select TortoiseHG ‣ Clone as shown below:

_images/PMR-tut1-tortoisehgclone.png

Paste the copied URL into the Source: area and then click the Clone button. This will create an empty folder named after the workspace identifier (a hexadecimal number). The folder will be created inside the folder in which you instigated the clone command.

Command line equivalent

hg clone [URI]

The repository will be cloned within the current directory of your command line window.

You will need to enter your username and password to clone the workspace, as the workspace will be set to private when it is created.

Populate with content

We have prepared a copy of the Noble (1962) model encoded in CellML ready for your use. You can download the model n62.cellml and save it into your cloned workspace folder created above. To verify that the model works, you can load it into the OpenCOR single cell view and simulate the model for 5000 ms. You can plot the variable V in the membrane component and you should see results as shown below:

_images/n62-initial-results.png

The arrows highlight the Ending point which should be set to 5000 ms and the variable V to be plotted.

As long as your results look similar to the above, everything is working as expected. Now is a good time to add the CellML model to the workspace record. The first step is to choose the TortoiseHG ‣ Add Files... option from the context menu for your workspace folder (1).

_images/addModel-1.png

This will bring up the hg add dialog box, showing the files which can be added (in this case only the n62.cellml file is available and it is selected by default). Clicking the Add button (2) will inform Mercurial that you want to add the selected file to the workspace.

_images/addModel-2.png

In Windows Explorer, you will see the file icon for the n62.cellml model now overlaid with the Mercurial + icon (3) to indicate that you have added the file but not yet committed it to the workspace.

_images/addModel-3.png

You can now commit the added file to the workspace by choosing Hg Commit... from the context menu in your workspace folder (4).

_images/addModel-4.png

This will bring up the commit dialog, which lets you explore and select all the possible changes in this workspace that you can commit. In this case, there is just the addition of the n62.cellml file to be committed. Before committing, a useful log message should be entered - this will help you keep track of the changes you make to the workspace and possibly the reasons for why a given set of changes were made (for example, due to feedback from reviewers). After entering the log message, click the Commit button to commit the changes (5). The dialog will stay visible in case you have further changes to commit, but in this case you can just close the dialog.

_images/addModel-5.png

Once you have successfully committed the change, you will see that the icon for the n62.cellml file has now changed to a green tick (6) to indicate that the file is up-to-date with no modifications.

_images/addModel-6.png

Command line equivalent

hg add n62.cellml
hg commit -m "Adding an initial copy of the Noble (1962) cardiac cellular electrophysiology model to the workspace."

While we have the model open in OpenCOR, we should have a go at annotating some of the variables in the model. Full instructions for this can be found in the OpenCOR CellML annotation view. First, we will follow the example given in those instructions for annotating the sodium_channel component.

The first step is to switch to the Editing mode (1) and select the sodium_channel component for annotation (2). We will be using the bio:isVersionOf as the qualifier for this annotation (3) and searching for terms related to sodium (4).

_images/INa-annotation-step1.png

We can then add desirable terms from the search results by choosing the + button beside the term to add it to the annotations for the sodium_channel component (5).

_images/INa-annotation-step2.png

Have a play annotating other variables and components in the model. When done annotating, make sure to save the model (File ‣ Save). With the CellML model updated, now is a good time to commit the changes to the workspace.

As above, choose Hg Commit... from the context menu in your workspace folder to bring up the Mercurial commit dialog. This time you will see that there is one file modified that can be committed, n62.cellml (1). As we mentioned previously, it is important to enter a good log message to keep a record of the changes you make (2), and the changes made to the currently selected file are shown to help remind you as to your changes (3). In this case, OpenCOR has made many changes to the whitespace in the file, as well as adding the RDF annotations at the bottom of the file.

_images/commitAnnotations.png

Command line equivalent

hg diff
hg commit -m "Using OpenCOR to add some annotations to my copy of the Noble 1962 model."
Push back to the repository

Having added content and performed some modifications, it is time to push the changes back to the model repository, achieved in TortoiseHG with the synchronization action. First, select TortoiseHG ‣ Synchronize from the context menu for your workspace folder.

_images/synchronize-1.png

This will bring up the TortoiseHG Sync dialog. In this dialog, you will see that by default you will be synchronizing with the workspace on the teaching repository from which you originally created this clone. This is usually what you want to do, but it is possible to synchronize with other Mercurial repositories. In this case, we want to push the changes we have made to the model repository, so choose the corresponding action from the toolbar (highlighted below).

_images/synchronize-2.png

Once you choose the push action, you will be asked to confirm that you want to push to your remote repository and then asked for your username and password (these are the credentials you created when registering for an account in the model repository). You will then see a listing of the transaction as your changes are pushed to the repository and a message stating the push has completed.

Command line equivalent

hg push

If you now return to browsing your workspace in your web browser, and refresh the page, you will see that your workspace now has some content - n62.cellml - and if you view the workspace history you will see the log messages that you entered when committing your changes above.

_images/updatedWorkspace.png

Now might be a good time to think about sharing your workspace with your neighbors. You might also want to have a look at creating an exposure for your workspace. To learn how to create exposures, please refer to Creating CellML exposures.

A new image segmentation study

In this part of the tutorial, we use the MAP client software to create a new workflow which will take a set of images from the model repository and apply an automated image segmentation algorithm to them to produce a data point cloud.

Before beginning this tutorial, you need to have the MAP client installed on your machine. Please follow the MAP Installation and Setup Guide.

The remainder of this tutorial can be found in the MAP documentation, MAP Tutorial - Create Workflow.

Best practice tips

Todo

Complete this section

Creating a new piece of work from scratch -> encouraging best practices!
  • create workspace, commit often, useful log messages
  • provenance data (making sure user name/ID is set correctly)
  • share directly with collaborators
  • annotation?
  • creating exposures? link through to PMR documentation...?

Reproducing published data

In the Physiome Model Repository (PMR), a complete piece of work is stored in a workspace. Each workspace is a Mercurial repository, which allows PMR to maintain a complete history of all changes made to every file it contains. In this tutorial, we take you through the process of reproducing an existing piece of "published" work - commonly, the first stage in establishing a new project which builds on previous discoveries.

Reproducing model behavior in OpenCOR

In this tutorial, we will be demonstrating how to reproduce the results from a CellML model as they were originally published. Because the Physiome Model Repository makes use of Mercurial, even if a workspace has continued being developed after a particular revision is published, we are able to step back through the workspace history to reproduce those original published results.

Following on from the previous tutorial, we make use of the Noble (1962) cardiac cellular electrophysiology model. In this tutorial, we will use the version of this model published in the CellML model repository and available here: http://models.cellml.org/e/174. If you navigate from that exposure to the workspace you can check the history as shown below.

_images/sourceHistory.png

As you can see highlighted in the Exposure column of the history above, there are two exposures for this workspace. For the purposes of this tutorial, we will assume that the earlier exposure corresponds to a study that has been published in a scientific journal. The later exposure is the result of further work on this model following the publication of the journal article, and it illustrates the difference between the two versions of the model. In this tutorial, we aim to reproduce the results as shown in the published journal article - corresponding to the earlier exposure.

Important

It is essential to use a Mercurial client to obtain models from the repository for editing. The Mercurial client is not only able to keep track of all the changes you make (allowing you to back-track if you make any errors), but using a Mercurial client is the only way to add any changes you have made back into the repository.

Cloning an existing workspace

The first step is to clone the workspace containing the model we want to work with. The steps to clone a workspace were demonstrated in the previous tutorial. In summary:

  1. Copy the source URI for Mercurial clone/push/pull (i.e., http://models.cellml.org/w/andre/embc13-n62);
  2. Clone the repository (TortoiseHG ‣ Clone or hg clone [uri]) to a folder on your machine.

Check the model

Now that we have the model, we want to confirm that we can reproduce the results the model is currently expected to produce. Load the n62.cellml file in the newly cloned folder into OpenCOR, run a simulation for 5000 ms, and plot the membrane potential, V. This should result in a graph similar to that shown in the upper figure of the exposure page, reproduced here for convenience.

_images/originalResults.png

Notice that in the 5000 ms simulation there are five action potentials.

Revert to an earlier version of the model

Now that we are happy that the current version of the model reproduces the results it should, we want to go back to the version of the model that was published in the journal article. This is a common requirement: any new work will usually be based on the published, validated version of a model rather than its latest version, which may have deviated from what was published.

Using Mercurial, there are several methods by which you can jump around the history of a workspace. The method that works best depends a lot on what you want to do with the workspace once you change back to a revision that is not the most recent. Searching the internet for information on the Mercurial (hg) commands revert, update, and branch is probably a good place to start working out which is best for your situation. In this case, we have a fairly simple requirement: to go back to the revision prior to the current one so that we can reproduce some simulation results. If we were actually going to do further development in this workspace, we would need a more elaborate solution than the one described below.

Here, we need to update our local clone of the workspace to a state matching the published journal article. In order to do this, we need to find the appropriate revision identifier to use with our Mercurial client. We can find the revision identifier by navigating to the workspace's history tab in the model repository and choosing the [files] link for the revision corresponding to the earlier exposure, shown below.

_images/sourceHistoryFilesLink.png

From the files page, you will see the required revision identifier as highlighted in the image below.

_images/sourceFilesPublished.png
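Command line equivalent (hg log lists the changesets in your local clone together with their revision identifiers, so you can also find the identifier this way):

hg log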

You should copy this identifier to the clipboard ready for use in the next step. In your local clone of the workspace, select TortoiseHG ‣ Update... from the context menu. This will bring up the Update dialog.

_images/hgUpdate-1.png

In this dialog, you should paste the revision identifier copied above into the Update to: field (1) and then click the Update button (2).

_images/hgUpdate-2.png

Command line equivalent

hg update -r 9cad4365b0b8

You will now see in your local clone that the files have reverted back to that previous version. Loading this version of n62.cellml into OpenCOR and simulating for 5000 ms should result in the figure matching that presented in the earlier exposure page and reproduced here for convenience.

_images/revertedResults.png

Note in particular that there should now be the same six action potentials that were present in the published version of the model.

Recreating a published image segmentation study

In the previous MAP tutorial you created a new workflow for segmenting a set of images. In this scenario, we imagine that someone has previously published a similar study also using the MAP client and PMR. You can see an exposure for such an example here: http://teaching.physiomeproject.org/e/190/. In this tutorial, we use the MAP client to reproduce this study.

Import workflow

To recreate this published workflow, we need to import it into the MAP client. To do this, in the MAP client select File ‣ Import. The first step is to provide a folder in which to store the workflow on your local machine; you should create a new folder for this workflow. You will then be presented with the MAP PMR tool (you will need to make sure you have registered the MAP client with PMR). In the PMR tool you are able to search for the workflow you would like to import from the repository. In this case, search for the term autosegmenter (1). You will see there is only one result (2), which you should select before clicking the OK button.

_images/autosegmenterSearch.png

This will import the workflow from the repository to your local folder and you should then see the workflow as shown in the exposure page referenced above. If you have the MAP client and plugins all working correctly, clicking Execute should result in an interactive view of the segmented data, as also shown on the exposure page referenced above.

Note

Currently, when changes are saved in the MAP client, they are automatically saved to the workspace and pushed to the repository. In this example you will not have permission to push into the repository. This is a known limitation of the MAP client at present.

Best practice tips

Todo

This section.

Some tips for best practice:
  • It is essential to use a Mercurial client to obtain models from the repository for editing. The Mercurial client is not only able to keep track of all the changes you make (allowing you to back-track if you make any errors), but using a Mercurial client is the only way to add any changes you have made back into the repository. Some tools, like MAP, will hide the Mercurial details from the user.
  • By making your work publicly available in standard formats you significantly enhance the ability of other scientists to make use of your work. Similarly, when embarking on a new study it is worth checking the various repositories for existing work that you might be able to build on - such work should normally be discovered during a literature search.

Find an existing piece of work and extend it

In the preceding tutorials, you have learnt how to create a new piece of work from scratch using the Physiome Model Repository and how to reproduce a "published" result. In this tutorial, we will demonstrate how to take an existing piece of work, stored in a public workspace, and develop it further to address a new goal.

Extending an existing CellML model

In this part of the tutorial, we will once again be making use of the Noble (1962) cardiac cellular electrophysiology model. We will be taking the model and making changes to alter its behaviour. For this, we will be using the version of the model published in the teaching instance of the repository: http://teaching.physiomeproject.org/e/183, but the process described below will also work in the main repository site.

Forking an existing workspace

Important

It is essential to use a Mercurial client to obtain models from the repository for editing. The Mercurial client is not only able to keep track of all the changes you make (allowing you to back-track if you make any errors), but using a Mercurial client is the only way to add any changes you have made back into the repository.

For this tutorial, we will fork an existing workspace. This creates a new workspace owned by you, containing a copy of all the files in the workspace you forked including their complete history. This is equivalent to cloning the workspace, creating a new workspace for yourself, and then pushing the contents of the cloned workspace into your new workspace.

Forking a workspace can be done using the Physiome Model Repository web interface. The first step is to find the workspace you wish to fork. We will use the EMBC 2013 Tutorial - Noble 1962 workspace from the exposure referenced above, which can be found at: http://teaching.physiomeproject.org/workspace/182.

Now click on the fork option in the toolbar, as shown below (1).

_images/forkN62.png

You will be asked to confirm the fork action by clicking the Fork button (2). You will then be shown the page for your forked workspace.

Cloning your forked workspace

In order to make changes to your workspace, you have to clone it to your own computer. To do this, follow the procedure as described in the earlier tutorial.
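Command line equivalent (substitute the Mercurial clone/push/pull URI shown on the page for your forked workspace):

hg clone [URI of your forked workspace]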

Quietening the self excitation

The version of the Noble 1962 model you have just forked and cloned is a model of a Purkinje fibre cell. These cells are capable of acting as pacemaker cells, although they are usually entrained by the sinoatrial node of the heart. The Noble model reproduces this behavior but is also able to simulate a non-pacing version of the cell. This is accomplished by counteracting the decreasing potassium current that gives rise to the gradual depolarization of the membrane potential seen in the figures from OpenCOR for the model in the previous tutorials. Once the cell is in a quiescent state, we are then able to apply an electrical stimulus to impose our own pacing regime.

If you load the n62.cellml file from the workspace you have just cloned into OpenCOR, set the duration of the simulation to 5000 ms, and plot the membrane potential V, you will be able to see the effect of altering the value of the variable g_K_add in the parameters component. As you increase this value you should see the resting potential decrease and the abolition of the self-exciting mechanism. A value of 0.001 mS_per_mmsq keeps the resting potential in the physiological range and makes the cell quiescent.

The version of OpenCOR we are using in this tutorial will not save the modified parameter value, so you will need to open the n62.cellml file in a text editor and make the change manually. In your text editor search for the g_K_add variable in the parameters component, as shown below.

_images/n62-gK-add-code.png

Set the initial_value attribute to the value you determined most suitable in OpenCOR. Reload the model into OpenCOR to confirm that the results are as expected, hopefully something similar to those shown below.
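After the edit, the g_K_add declaration in n62.cellml should end up looking something like the following sketch (using the 0.001 mS_per_mmsq value suggested above; leave any other attributes present in your copy of the file as they are):

<variable name="g_K_add" initial_value="0.001" units="mS_per_mmsq"/>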

_images/n62-gK-add-modified-results.png

Now would be a good time to commit your changes to your clone of the workspace.
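Command line equivalent (the log message here is only an example):

hg commit -m "Increase g_K_add to quieten the self-excitation in my copy of the Noble 1962 model."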

Adding an electrical stimulation protocol

Now that we have a quiescent version of the Noble (1962) model, we are able to consider adding our own electrical stimulation protocol. If you open your current version of the n62.cellml document in your text editor again, you will see a component with the name stimulus_protocol, as shown below.

_images/n62-stimulusProtocol-code.png

As you can see in this snippet of the XML source, there is a stimulus current variable, IStim, which is given a value of 0.0 uA_per_mm2. In this extension to the model we will replace this simple assignment of no stimulus current with a definition of a periodic applied stimulus. The code example below shows one way to encode such a periodic stimulus current in CellML.

<component cmeta:id="stimulus_protocol" name="stimulus_protocol">
  <variable name="IStim" public_interface="out" units="uA_per_mmsq"/>
  <variable name="time" public_interface="in" units="ms"/>
  <variable name="stimPeriod" initial_value="750" units="ms"/>
  <variable name="stimDuration" initial_value="1" units="ms"/>
  <variable name="stimCurrent" initial_value="400" units="uA_per_mmcu"/>
  <variable name="Am" initial_value="200" units="per_mm"/>
  <math xmlns="http://www.w3.org/1998/Math/MathML">
      <apply id="stimulus_calculation"><eq />
          <ci>IStim</ci>
          <piecewise>
              <piece>
                  <apply><divide/>
                      <ci>stimCurrent</ci>
                      <ci>Am</ci>
                  </apply>
                  <apply><lt/>
                      <apply><rem/>
                          <ci>time</ci>
                          <ci>stimPeriod</ci>
                      </apply>
                      <ci>stimDuration</ci>
                  </apply>
              </piece>
              <otherwise>
                  <cn cellml:units="uA_per_mmsq">0.0</cn>
              </otherwise>
          </piecewise>
      </apply>
  </math>
</component>

In the above example, we have introduced some new variables to control the frequency, duration, and magnitude of the applied stimulus current. If you replace the stimulus_protocol component in the n62.cellml model with the one above, you are able to load the new version of the model into OpenCOR and have a play with those variables to ensure they are behaving as expected. Note: you may need to set the Maximum step for CVODE to 0.1 or change to the Forward Euler integrator in OpenCOR to ensure that your specified stimulus is correctly detected by the numerical integration scheme.

Now would be a good time to commit your changes to your clone of the workspace and push them back to the model repository. You might also want to think about sharing your workspace with your neighbors or to have a look at creating an exposure for your workspace. To learn how to create exposures, please refer to Creating CellML exposures.
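Command line equivalent (again, the log message is only an example):

hg commit -m "Add a periodic stimulus protocol to my copy of the Noble 1962 model."
hg push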

Wrapping your favorite tool as a MAP client plugin

The MAP framework is a very general purpose workflow-based tool. By taking advantage of the collaborative development and sharing features of the Physiome Model Repository, you have seen in the previous MAP tutorial how it is possible to share your work in a manner that makes it easy for other scientists to utilise and extend.

With the plugin approach used by the MAP client software, it is possible to wrap your favorite software tools to make them available as steps in a MAP workflow. For example, the previous tutorials have made use of the Zinc visualisation and field manipulation library.

The process of developing a MAP plugin is beyond the scope of this tutorial. If you are interested, there is the beginning of some documentation for this process, which will be developed further in the near future. There is also a skeleton plugin that comes with the MAP client software (in the plugins folder), which serves as the starting point for developing any new plugin, and the GitHub project which will contain a collection of MAP plugins. If you are interested in developing a plugin or would like help wrapping your favorite tool as a workflow step, please don't hesitate to contact the MAP team (or talk to Hugh today!).

Best practice tips

Todo

Complete this section

Creating a new piece of work from scratch -> encouraging best practices!
  • create workspace, commit often, useful log messages
  • provenance data (making sure user name/ID is set correctly)
  • share directly with collaborators
  • annotation?
  • creating exposures? link through to PMR documentation...?

Glossary

Clone
Clone is a Mercurial term that means to make a complete copy of a Mercurial repository. This is done in order to have a local copy of a repository to work in.
Embedded workspace
Embedded workspaces
A Mercurial concept that allows workspaces to be nested within other workspaces.
Exposure
Exposures

A publicly available page that provides access to and information about a specific revision of a workspace. Exposures are used to publish the contents of workspaces at points in time where the model(s) contained are considered to be useful.

Exposures are created by the PMR software, and offer views appropriate to the type of model being exposed. CellML files for example are presented with options such as code generation and mathematics display, whereas FieldML models might offer a 3D view of the mesh.

Fork
A copy of the workspace which includes all the original version history, but is owned by the user who created the fork.
Mercurial
Mercurial is a distributed version control system, used by the Physiome Model Repository software to maintain a history of changes to files in workspaces. See a tour of the Mercurial basics for some good introductory material.
Pull
Pulling
The term used with distributed version control systems for the action of pulling changes from one clone of the repository into another. With PMR, this usually implies pulling from a workspace in the model repository into a clone of the workspace on your local machine.
Push
Pushing
The term used with distributed version control systems for the action of pushing changes from one clone of the repository into another. With PMR, this usually implies pushing from a workspace clone on your local machine back to the workspace in the model repository, but could be into any other clone of the workspace. See a tour of the Mercurial basics for some good introductory material.
Python
Python is a programming language that lets you work more quickly and integrate your systems more effectively. See http://python.org for all the details.
Synchronize
Used to pull the contents or changes from other Mercurial repositories into a workspace via a URI.
Workspace
Workspaces
A Mercurial repository hosted on the Physiome Model Repository. This is essentially a folder or directory in which files are stored, with the added feature of being version controlled by the distributed version control system called Mercurial.

Tutorial to do list

General

Todo

  • Add many more references (.. _like-this:) to docs for cross-referencing.
  • make sure all references to the staging instance are updated to teaching.physiomeproject.org

Within sections

Todo

  • Add many more references (.. _like-this:) to docs for cross-referencing.
  • make sure all references to the staging instance are updated to teaching.physiomeproject.org

(The original entry is located in ABIBook-TODO.rst, line 10.)

Todo

Add some useful command examples to command window doc - eg. "saving" graphics

(The original entry is located in Cmgui/index.rst, line 34.)

Todo

Need to check this section on obtaining models via mercurial.

(The original entry is located in PMR/PMR-downloading-viewing.rst, line 76.)

Todo

This section needs more work.

(The original entry is located in PMR/PMR-embeddedworkspaces.rst, line 9.)

Todo

  • Update all PMR documentation to reflect workspace ID changes and user workspace changes, if they go ahead.
  • Get embedded workspaces doc written.
  • Get some best practice docs written.

(The original entry is located in PMR/index.rst, line 29.)

Todo

Complete this section

(The original entry is located in tutorials/embc13/scenario1/bestpractice.rst, line 6.)

Todo

This section.

(The original entry is located in tutorials/embc13/scenario2/bestpractice.rst, line 6.)

Todo

Complete this section

(The original entry is located in tutorials/embc13/scenario3/bestpractice.rst, line 6.)

ABI research case studies

Examples of research at the ABI - goals, tools and techniques used, outcomes.

To serve as examples of how to use the available tools, as well as promotional material.

If you would like your research to appear in this section, please let me know, or fork this documentation on GitHub!

About the ABI

History, research groups, opportunities, etc.

Contents:

ABI Resources

Documentation covering what resources are available for MSc and PhD students to carry out their research.

Computer hardware and software
  • Details of computers available to students.
  • Operating system options
  • List of software packages available for installation
  • License issues
  • Information about help, IT staff etc.
workshop
Other

ABI Research Groups

CAP user documentation

CM user documentation

CellML API user documentation

Exporting ip files for CM from cmgui

You can export files in the ip format used by CM using the gfx export command, for example:

gfx export cm field coordinates ipcoor FILENAME.ipcoor ipbase FILENAME.ipbase ipmap FILENAME.ipmap ipnode FILENAME.ipnode ipelem FILENAME.ipelem region "/"

Testing MathJax LaTeX support

Just a test. Also, a cheeky little test of PEP 3118: New memoryview implementation and buffer protocol documentation.

First test

\[(a + b)^2 = a^2 + 2ab + b^2\]

\[(a - b)^2 = a^2 - 2ab + b^2\]

Some text

\[z \left( 1 \ +\ \sqrt{\omega_{i+1} + \zeta -\frac{x+1}{\Theta +1} y + 1} \ \right) \ \ \ =\ \ \ 1\]

More intervening text

\[\frac{d}{dx}\left( \int_{0}^{x} f(u)\,du\right)=f(x).\]

Does this get stripped?

This is a paragraph.

Probably.

MAP Client

The MAP Client is a cross-platform framework for managing workflows. MAP Client is a plugin-based application that can be used to create workflows from a collection of workflow steps.

The MAP Client is an application written in Python and based on Qt, the cross-platform application and UI framework. Further details are available in the documents listed below.

MAP Installation and Setup Guide

This document describes how to install and setup the MAP software for use on your machine. The MAP software is a Python application that uses the PySide Qt library bindings. The instructions in this document cover the installation and setup on a Windows based operating system. The instructions for GNU/Linux and OS X are similar and should be extrapolated from these instructions. There are some side notes for these other operating systems to help, but not full or dedicated instructions. If for any reason you get stuck and cannot complete the instructions please contact us.

MAP

The MAP framework is written in Python and is designed to work with Python 2 and Python 3. The MAP application is tested against Python 2.6, Python 2.7 and Python 3.3 and should work with any of these Python versions. Currently the MAP framework is not packaged as an application, requiring the user to set up the environment prior to launching the mapclient.py executable Python script.

The MAP application consists of the framework and various tools; by itself it can do very little. It is the job of the plugins to provide functionality. The MAP application as referred to in this section of the instructions may be described as the barebones application for this reason.

To execute the barebones application we need to first install some dependencies:

  1. Python (and make sure to add the Python and Python\Scripts folders to your system PATH).
  2. PySide (PySide and PyQt4 are virtually interchangeable but currently this would require some textual changes)
  3. Python setup tools and then using easy_install.exe to install:
     1. Requests Python library (easy_install requests)
     2. OAuthlib Python library (not the OAuth Python library) (easy_install oauthlib).

Also, if we wish to interact with the Physiome Model Repository (PMR) we need:

We can now install the barebones MAP client application. The barebones application can be launched via the command window with the following command in the extracted mapclient/src folder:

mapclient.py

which should result in an application window similar to that shown below.

_images/mapClientBarebones.png

Now that the barebones MAP application is installed and running we can move on to some useful plugins.

MAP Plugins

The installation of MAP plugins simply requires obtaining the plugins and then using the MAP plugin manager to let the MAP client know where to look for plugins. Furthermore, there is a github project which is used to provide a common collection of MAP plugins. For the purposes of this tutorial, the autosegmentationstep plugin will be used. You can download a copy of the plugin, extract it, and then follow the instructions for adding the folder in which you extracted the plugin to the MAP plugin manager.

Zinc and PyZinc

Zinc is an advanced field manipulation and visualisation library and PyZinc provides Python bindings to the Zinc library. Binaries are available for download for Linux, Windows, and OS X. The MAP client is able to make use of Zinc for advanced visualisation and image processing steps. To get PyZinc installed, follow these steps:

  1. Install Zinc using either the Windows installer (ensuring that you enable the option for the installer to add Zinc to the system PATH), or unzip the archive and manually copy the library file to somewhere on your PATH (which could include the PyZinc installation folder).
  2. Unzip the downloaded PyZinc archive.
  3. In a command window, change into the folder that PyZinc was extracted into.
  4. Execute the following command: python setup.py install (this uses a similar mechanism to the easy_install setup above).

You can check that you have Zinc and PyZinc correctly installed and functional by running the volume_fitting.py application provided with the tutorial materials. If Zinc and PyZinc are working you should get an application window similar to that shown below with the interactive three-dimensional model viewer shown. Note you will need to restart the command window after installing PyZinc in order to refresh the system PATH.

_images/volumeFitting.png
Which Binary?

There are a number of binaries available for any given platform and you must match the package description with your system setup. The package description contains the package name, package version, package architecture, package operating system and in the case of PyZinc the package Python version. The package extension indicates the type of package and they come in two main flavours: installer/package manager; archive.

Additionally the version of the PyZinc binaries you download must match the version of the Zinc library binaries.

MAP Features Demonstration

Section author: Hugh Sorby

Note

MAP is currently under active development, and this document will be updated to reflect any changes to the software or new features that are added. You can follow the development of MAP at the launchpad project.

This document details the features of MAP, a cross-platform framework for managing workflows. MAP is a plugin-based application that can be used to create workflows from a collection of workflow steps.

This demonstration is based on version 0.9.0 of MAP, available from the project downloads. Directions for installing MAP and getting the MAP plugins are available in the MAP Installation and Setup Guide.

In this demonstration we will cover the features of MAP. We will start with a quick tour and then create a new workflow that will help us segment a region of interest from a stack of images.

Quick Tour

When you first load MAP, it will look something like this:

_images/blank_MAP_1.png

In the main window we can see three distinct areas that make up the workflow management side of the software. These three areas are the menu bar (at the top), the step box (on the left), which contains the steps that you can use to create your workflow, and the workflow canvas (on the right), an area for constructing a workflow.

In the Step box we will only see two steps. This is because we have only loaded the default Steps and not loaded any of the external plugins that MAP can use.

Step Box

The Step box provides a selection of steps that are available to construct a workflow from. The first time we start the program only the default plugins are available. To add more steps we can use the Plugin Manager tool. To use a step in our workflow we drag the desired step from the step box onto the workflow canvas.

Workflow canvas

The workflow canvas is where we construct our workflow. We do this by adding the steps to the workflow canvas from the step box that make up our workflow. We then make connections between the workflow steps to define the complete workflow.

When a step is added to the workflow, the icon which is visible in the Step box is augmented with visualisations of the step's ports and the step's configured status. The annotation of a step's ports will show when the mouse is hovered over a port. The image below shows the Image Source step with the annotation for the port displayed.

_images/step_with_port_info_displayed_1.png
Tools

MAP currently has three tools that may be used to aid the management of the workflow. They are the Plugin Manager tool, the Physiome Model Repository (PMR) tool and the Annotation tool. For a description of each tool see the relevant sections.

Plugin Manager Tool

The plugin tool is a simple tool that enables the user to add or remove additional plugin directories. MAP comes with some default plugins which the user can decide to load or not. External directories are added with the add directory button. Directories are removed by selecting the required directory in the Plugin directories list and clicking the remove directory button.

Whilst additions to the plugin path will be visible immediately in the Step box, deletions will not be apparent until the next time the MAP Client is started. This behaviour is a side-effect of the Python programming language.

_images/plugin_manager_1.png
Physiome Model Repository (PMR) Tool

The PMR tool uses webservices and OAuth to communicate between itself (the consumer) and the PMR website (the server). Using this tool we can search for and find suitable resources on PMR.

The PMR website uses OAuth to authenticate a consumer and determine consumer access privileges. Here we will discuss the parts of OAuth that are relevant to getting you (the user) able to access resources on PMR.

In OAuth we have three players: the server, the consumer and the user. The server provides a service that the consumer wishes to use. It is up to the user to allow the consumer access to the server's resources and to set the level of access to the resource. For the consumer to access privileged information of the user stored on the server, the user must register the consumer with the server; this is done by the user giving the consumer a temporary access token. This temporary access token is then used by the consumer to finalise the transaction and acquire a permanent access token. The user can deny the consumer access at any time by logging into the server and revoking the permanent access token.

If you want the PMR tool to have access to privileged information (your non-public workspaces stored on PMR) you will need to register the PMR tool with the PMR website. We do this by clicking on the register link as shown in the figure below. This does two things: it shows the Application Authorisation dialog and opens a web browser at the PMR website. [If you are not logged on at the PMR website you will need to do so now to continue; instructions on obtaining a PMR account are available here.] On the PMR website you are asked to either accept or deny access to the PMR tool. If you allow access then the website will display a temporary access token that you will need to copy and paste into the Application Authorisation dialog so that the PMR tool can get the permanent access token.

_images/PMRTool_1.png
Annotation Tool

The Annotation tool is a very simple tool to help a user annotate the Workflow itself and the Step data directories that are linked to PMR. At this stage there is a limited vocabulary that the Annotation tool knows about, but this is intended to be extended in coming releases. The vocabulary that the Annotation tool is aware of is available in the three combo-boxes near the top of the dialog.

_images/top_annotation_1.png

The main part of the Annotation tool shows the current annotation from the current target.

_images/main_annotation_1.png

In the above image we can see the list of annotations that have been added to the current target. This is a simplified view of the annotation with the prefix of the terms removed for clarity.

MAP Tutorial - Create Workflow

Section author: Hugh Sorby

Note

MAP is currently under active development, and this document will be updated to reflect any changes to the software or new features that are added. You can follow the development of MAP at the launchpad project.

This document takes the reader through the process of creating a workflow from existing MAP plugins. Having a read through the MAP Features Demonstration is a good way to become familiar with the features of the MAP application.

Getting Started

To get started with MAP we need to create a new workflow. To do this we use File ‣ New ‣ Workflow menu option (Ctrl-N shortcut). This option will present the user with a directory selection dialog. Use the dialog to select a directory where the workflow can be saved. Once we have chosen a directory the step box and workflow canvas will become enabled.

To create a meaningful workflow we will need to use some external plugins. To load these plugins we will use the Plugin Manager tool. The Plugin Manager tool can be found under the Tools menu. Use the Plugin Manager to add the directory location of the MAP Plugins. After confirming the changes to the Plugin Manager you should see a few new additions to the Step box.

Creating the Workflow

To create a workflow we use Drag 'n' Drop to drag steps from the Step box and drop the step onto the workflow canvas. When steps are first dropped onto the canvas they show a red cog icon to indicate that the step is not configured. At a minimum a step requires an identifier to be set before it can be used.

Drag the steps Image Source, Data Store and Automatic Segmenter onto the workflow canvas. All the steps will show a red cog except the 'Automatic Segmenter' step; this icon indicates that the step needs to be configured. To configure a step we can either right click on the step to bring up a context menu from which the configure action can be chosen, or simply click the red cog directly. See the relevant section for the configuration of a particular step.

Note

When configuring a step you are asked to set an identifier. The identifier you set must be unique within the workflow and it must not start with a '.'.

Configuring the Image Source Step

The Image Source step requires a location. This location contains the images to import. The location may be a directory on the local hard disk or a workspace on PMR. Here we will show how to configure the Image Source step with images that have been stored in a workspace on PMR.

First, each step requires a unique id. The id is used to create a file containing the step configuration information. The id for the Image Source step is used to create a directory under the workflow project directory. In the identifier edit box enter a directory name. Once a valid identifier is entered, the red highlight around the edit box will be turned off.

Next, change to the PMR tab, where we will see an ellipsis button for bringing up the PMR tool dialog. You need to register the PMR tool to access certain webservices; the details on how to do this are available here. The remainder of this tutorial will assume you have set up access to PMR properly. In the search box of the PMR dialog we need to enter the search term 'blood-vessels'. The result of the search should look like the image below.

_images/PMRTool_2.png

Select this entry in the search listing and click 'Ok'. The selected PMR workspace will be downloaded in the background. When the download is finished the red cog icon will disappear. If the download is not successful a dialog will appear to inform you of the error.

MAP is not set up to work with streamed resources, so we must download the workspace from PMR.

Configuring the Point Cloud Step

Configuring the Point Cloud step is trivial at this time. This is because the step only requires an identifier to be set. The identifier will be used to create a directory where the received point cloud will be serialized.

Executing the Workflow

At this point you should have a workflow area looking like this:

_images/configured_MAP_1.png

Once all the steps in the workflow are configured (no more red cog icons) we can make connections between the steps. To make a connection between two steps, the first step must provide what the second step uses. When trying to connect two steps that cannot be connected, you will see a no entry icon over the connection for a short period of time and then the connection will be removed. The following image shows an incorrect connection being attempted.

_images/error_connection.png

If the mouse is hovered over a port you will see a description of what the port provides or uses. To make a connection click on a port and drag the mouse to the port to be connected.

To execute the workflow we need to connect up the steps in the correct manner and save the workflow. The workflow should be connected up as can be seen in the following image.

_images/connected_MAP_1.png

Once the workflow has been saved the execute button in the lower left corner should become enabled. Clicking the execute button will, naturally enough, execute the workflow step by step.

Note

We can make connections between steps at any time, not just when all steps have been properly configured.

Automatic Segmenter Step

The 'Automatic Segmenter' step actually allows us to interact with the executing workflow. With this step we can move the image plane up and down and change the visibility of the graphical items in the scene. The image plane is moved through the use of the slider on the left hand side. The visibility of the graphical items is controlled by checking or unchecking the relevant check boxes.

MAP Plugins

The Plugin lies at the heart of the MAP framework. The key idea behind the plugins is to make them as simple as possible to implement. The interface is defined in documentation and the plugin developer is expected to adhere to it. The framework leaves the responsibility of conforming to the plugin interface up to the plugin developer. The plugin framework is based on Marty Alchin's [1] article on a plugin framework for Django. The plugin framework is very lightweight and requires no external libraries and can be made to work with Python 2 and Python 3 simultaneously.

Workflow Step

The Workflow Step is the basic item within which plugin developers place their software. A workflow step can be of any size and complexity, although it must adhere to the plugin design to work properly with the application. Every step that wishes to act like a Workflow Step must derive itself from the Workflow step mountpoint. The Workflow step mountpoint is the interface between the application and the plugin. The Workflow step mountpoint can be imported like so:

from mountpoints.workflowstep import WorkflowStepMountPoint

A skeleton step is provided as a starting point for the developer to create their own workflow steps. The skeleton step is actually a valid step in its own right and it will show up in the Step box if enabled. However, the skeleton step has no use other than as an item to drag around on the workflow area. The skeleton step is discussed below; first, however, the plugin interface is discussed.

Plugin Interface

The plugin interface is the layer between the application and the developer's plugin. The plugin interface is not defined by contract, as we so often see in Java, but rather by documentation. This puts the burden of the specification on the documentation and the conformity to the specification on the developer. The underlying theory is that the developer is able to follow the specification without the application having to do rigorous checks to make sure this is the case. The phrase 'If it walks like a duck' is often used.

In this section the specification of the Workflow step plugin interface is given. It is then up to the developer to make sure their plugin behaves like one.

The details of the plugin interface are provided in the documentation of the source code in the relevant source file, and additionally here for easy reference. The documentation provided with the source code is very direct, with little explanation; the following documentation provides a bit more explanation and discussion on the various aspects of the plugin interface. The documentation provided here should be considered the slave documentation and the documentation provided with the source code as the master documentation.

There are essentially, what may be considered, three different levels of the plugin design.

  1. The Musts
  2. The Shoulds
  3. The Coulds

Creating a workflow step that satisfies the musts will create an actual workflow step that can be added to the workflow area and interacted with, but it won't be very useful. Satisfying the shoulds will usually be sufficient for the very simplest of steps. Simple steps are, for instance, ones that provide images or location information for data. Doing some of the coulds will create a much more interesting step.

The requirements for creating a step have been kept as simple as possible; this is to allow the developer a quick route into the development of the step content.

The following three sections discuss these three levels in more detail.

A Step Must
  • The plugin must be derived from the WorkflowStepMountPoint class defined in the package mountpoints.workflowstep

  • Accept a single parameter in its __init__ method.

  • Define a name for itself; this must be passed into the initialisation of the base class.

  • Define the methods

    def configure(self):
        pass
    
    def getIdentifier(self):
        pass
    
    def setIdentifier(self, identifier):
        pass
    
    def serialize(self, location):
        pass
    
    def deserialize(self, location):
        pass
    
A Step Should
  • Implement the configure method to configure the step. This is typically in the form of a dialog. When implementing this function the class method self._configureObserver() should be called to inform the application that the step configuration has finished.
  • Implement the getIdentifier/setIdentifier methods to return the identifier of the step.
  • Implement the serialize/deserialize methods. The steps should serialize and deserialize from a file on disk located at the given location.
  • Define a class attribute _icon that is of the type QtGui.QImage.
  • Provide information about what the step uses and/or what it provides. This is achieved through defining ports on the step.
A Step Could
  • Implement the method 'portOutput(self)' if it was providing some information to another step.
  • Implement the method 'execute(self, dataIn)' if it uses some information from another step. If a step implements the 'execute(self, dataIn)' method then it must call '_doneExecution()' when the step is finished.
  • Define a category using the '_category' attribute. This attribute will add the step to the named category in the step box, or it will create the named category if it is not present.
Ports

A port is a device to specify what a workflow step provides or uses. A port is described using Resource Description Framework (RDF) triples. The port description is used to determine whether or not two ports may be connected together. One port can either use or provide one thing. A single port must not both provide a thing and use a thing. Ports are ordered by entry position.

Ports are added by using the 'addPort(self, triple)' method from the base class.
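For example, a provides port might be added from the step's __init__ method along the following lines. This is a sketch only: the RDF terms shown here are assumptions, so check the master documentation in the source code for the URIs the framework actually expects.

def __init__(self, location):
    super(PointCloudStep, self).__init__('Point Cloud', location)
    # The URIs below are illustrative only; consult the source code
    # documentation for the terms the framework actually uses.
    self.addPort(('http://physiomeproject.org/workflow/1.0/rdf-schema#port',
                  'http://physiomeproject.org/workflow/1.0/rdf-schema#provides',
                  'http://physiomeproject.org/workflow/1.0/rdf-schema#pointcloud'))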

Skeleton Step

The skeleton step satisfies the musts of the plugin interface. It is a minimal step and it is set out as follows.

A Python package with the step name is created, in this case 'skeletonstep'. In the package's module file we add the code that needs to be read when the plugins are loaded.

The module file performs four functions. It contains the version information and the author's name for the module; for instance, the skeleton step has a version of '0.1.0' and an author's name of 'Xxxx Yyyyy'. It adds the current directory into the Python path; this is done so that the step's Python files know where they are in relation to the Python path. It also (optionally) prints out a message showing that the plugin has been loaded successfully. But the most important function it performs is to call the Python file that contains the class that derives from the workflow step mountpoint.

The 'SkeletonStep' class in the skeletonstep.step package is a very simple class. It derives from the 'WorkflowStepMountPoint', calls the base class with the name of the step, accepts a single parameter in its __init__ method and defines the five required functions to satisfy the plugin interface.
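Putting the musts together, a step class along the following lines would satisfy the plugin interface. This is a sketch rather than the actual skeleton step source; in particular, the exact signature of the base class initialiser and the use of the single location parameter are assumptions here, so treat the documentation in the source code as the master reference.

from mountpoints.workflowstep import WorkflowStepMountPoint


class SkeletonStep(WorkflowStepMountPoint):
    """A minimal sketch of a step that satisfies the musts listed above."""

    def __init__(self, location):
        # A single parameter is accepted, and the step's name is passed to
        # the base class initialisation (assumed signature - check the
        # source code documentation).
        super(SkeletonStep, self).__init__('Skeleton Step', location)
        self._identifier = ''

    def configure(self):
        # A real step would typically show a configuration dialog here and
        # then call self._configureObserver() to signal that it is done.
        self._configureObserver()

    def getIdentifier(self):
        return self._identifier

    def setIdentifier(self, identifier):
        self._identifier = identifier

    def serialize(self, location):
        pass  # write the step's configuration to a file at 'location'

    def deserialize(self, location):
        pass  # read the step's configuration from a file at 'location'

A more interesting step would, as described in the shoulds and coulds above, also define _icon, add ports, and implement portOutput(self) and/or execute(self, dataIn), remembering to call _doneExecution() when finished.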

When enabled the skeleton step will be a fully functioning step in the MAP Client.

References

[1] Marty Alchin, 10 January 2008: http://martyalchin.com/2008/jan/10/simple-plugin-framework/

MAP Tutorial - Create Plugin

Section author: Hugh Sorby

Note

MAP is currently under active development, and this document will be updated to reflect any changes to the software or new features that are added. You can follow the development of MAP at the launchpad project.

This document takes the reader through the process of creating a new plugin for the MAP Client. The MAP Plugins document defines the plugin interface that the new plugin must adhere to.

A Simple Source Step Example

We need to create a source step for supplying Zinc model files.

First copy the skeletonstep directory to another directory. To make this step our own we first change the skeletonstep name to zincmodelsourcestep. The places we have to change are:

  1. The topmost directory

  2. The inner directory, this directory is used to namespace our new step.

  3. In the __init__.py file in the topmost directory, we also need to uncomment the lines:

    from zincmodelsourcestep import step
    print("Plugin '{0}' version {1} by {2} loaded".format(tail, __version__, __author__))

  4. In the __init__.py file in the inner directory, we have to change the name of the class to 'ZincModelSourceStep' and change the name of the step to 'Zinc Model Source'.

Now we need to be able to configure the step. To do this we can use qt-designer to create a 'configuredialog.ui' file that we can convert into Python code using 'pyside-uic'.
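As a rough illustration, once pyside-uic has generated a Python module from configuredialog.ui (for example a file called ui_configuredialog.py containing a Ui_ConfigureDialog class - both names depend on how you run the tool and on the object name set in qt-designer), the generated class can be wrapped in a small dialog for the step's configure method to use:

# Sketch only: module and class names are assumptions based on the
# description above; adjust them to match your generated code.
from PySide import QtGui

from ui_configuredialog import Ui_ConfigureDialog


class ConfigureDialog(QtGui.QDialog):

    def __init__(self, parent=None):
        super(ConfigureDialog, self).__init__(parent)
        # Attach the pyside-uic generated widgets to this dialog.
        self._ui = Ui_ConfigureDialog()
        self._ui.setupUi(self)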

PMR best practice - embedded workspaces

CellML Curation in Legacy Repository Software

As PMR contains much of the data ported over from the legacy software products that powered the CellML Model Repository, the curation system from that system was ported to PMR verbatim. This document describing the curation aspect of the repository is derived from documentation on the CellML site.

CellML Model Curation: the Theory

The basic measure of curation in a CellML model is described by the curation level of the model document. We have defined four levels of curation:

  • Level 0: not curated.
  • Level 1: the CellML model is consistent with the mathematics in the original published paper.
  • Level 2: the CellML model has been checked for (i) typographical errors, (ii) consistency of units, (iii) that all parameters and initial conditions are defined, (iv) that the model is not over-constrained, in the sense that it contains equations or initial values which are either redundant or inconsistent, and (v) that running the model in an appropriate simulation environment reproduces the results published in the original paper.
  • Level 3: the model is checked for the extent to which it satisfies physical constraints such as conservation of mass, momentum, charge, etc. This level of curation needs to be conducted by specialised domain experts.

CellML Model Curation: the Practice

Our ultimate aim is to complete the curation of all the models in the repository, ideally to the level that they replicate the results in the published paper (level 2 curation status). However, we acknowledge that for some models this will not be possible. Missing parameters and equations are just one limitation; at this point it should also be emphasised that the process of curation is not just about "fixing the CellML model" so that it runs in currently available tools. Occasionally it is possible for a model to be expressed in valid CellML, but not yet able to be solved by CellML tools. An example is the seminal Saucerman et al. 2003 model, which contains ODEs as well as a set of non-linear algebraic equations which need to be solved simultaneously. The developers of the CellML editing and simulation environment OpenCell are currently working on addressing these requirements.

The following steps describe the process of curating a CellML model:

  • Step 1: the model is run through OpenCell and COR. COR in particular is a useful validation tool. It renders the MathML in a human readable format making it much easier to identify any typographical errors in the model equations. COR also provides a comprehensive error messaging system which identifies typographical errors, missing equations and parameters, and any redundancy in the model such as duplicated variables or connections. Once these errors are fixed, and assuming the model is now complete, we compare the CellML model equations with those in the published paper, and if they match, the CellML model is awarded a single star - or level 1 curation status.
  • Step 2: Assuming the model is able to run in OpenCell and COR, we then go on to compare the CellML model simulation output from COR and OpenCell with the published results. This is often a case of comparing the graphical outputs of the model with the figures in the published paper, and is currently a qualitative process. If the simulation results from the CellML model and the original model match, the CellML model is awarded a second star - or level 2 curation status.
  • Step 3: if, at the end of this process, the CellML model is still missing parameters or equations, or we are unable to match the simulation results with the published paper, we seek help from the original model author. Where possible, we try to obtain the original model code, and this often plays an invaluable role in fixing the CellML model.
  • Step 4: Sometimes we have been able to engage the original model author further, such that they take over the responsibility of curating the CellML model themselves. Such models include those published by Mike Cooling and Franc Sachse. In these instances the CellML model is awarded a third star - or level 3 curation status. While this is laudable, ideally we would like to take the curation process one step further, such that level 3 curation is performed by a domain expert who is not the author of the original publication (i.e., peer review). This expert would then check that the CellML model meets the appropriate constraints and expectations for a particular type of model.

A point to note is that levels 1 and 2 of the CellML model curation status may be mutually exclusive: in our experience, it is rare for a paper describing a model to contain no typographical errors or omissions. In this situation, version 1 of a CellML model usually satisfies curation level 1 in that it reflects the model as it is described in the publication, errors included, while subsequent versions of the CellML model break the requirements for level 1 curation in order to meet the standards of level 2. This means that a model with two yellow stars does not necessarily meet the requirements of level 1 curation, but it does meet the requirements of level 2. Hopefully this conflict will be resolved when we replace the current star system with a more meaningful set of curation annotations.

Ultimately, we would like to encourage the scientific modeling community - including model authors, journals and publishing houses - to publish their models in CellML code in the CellML model repository concurrent with the publication of the printed article. This will eliminate the need for code-to-text-to-code translations and thus avoid many of the errors which are introduced during the translation process.

CellML Model Simulation: the Theory and Practice

As part of the process of model curation, it is important to know what tools were used to simulate (run) the model and how well the model runs in a specific simulation environment. In this case, the theory and the practice are essentially the same thing: we carry out a series of simulation steps which then translate into a confidence level, recorded as part of the simulator's metadata for each model. The four confidence levels are defined as:

  • Level 0: not curated (no stars);
  • Level 1: the model loads and runs in the specified simulation environment (1 star);
  • Level 2: the model produces results that are qualitatively similar to those previously published for the model (2 stars);
  • Level 3: the model has been quantitatively and rigorously verified as producing identical results to the original published model (3 stars).

Physiome Model Repository web interface reference

Section author: Dougal Cowan

This document describes the various components of the PMR web interface, which provides access to most of the functions of the PMR software via a web browser.

Physiome Project, CellML, and FieldML views

The Physiome Model Repository can be accessed via three different URLs, each of which gives a specific view of the repository: a Physiome Project view, a CellML view, and a FieldML view.

Each URL also provides a contextual search. Searching on the FieldML or CellML URLs will only yield FieldML or CellML results respectively, whereas all model types are included in search results using the Physiome Project URL.

Guest view

Browsing the PMR without logging in means you will be restricted to viewing published workspaces only. Browsing the repository in this way is only useful for searching, viewing, and downloading published models.

The main navigation bar will only show the Models Home and Exposures buttons.

_images/PMR-guestviewhome.png

The home page of the PMR when not logged in

The search box at the top right hand side of the page can be used to find published workspaces or exposures using terms such as author names, physiological or biological terms, or other specific words of interest.

Registered view

Logging in to the repository will provide you with a range of additional functions in the PMR web interface.

_images/PMR-registeredviewhome.png

The home page of the PMR when logged in as a standard user

An additional option, called My Workspaces, appears in the navigation bar when you are logged in to the repository site. Clicking on this will show you a page that lists all of the workspaces you have created on the repository. The page also contains a link to your workspace container, where you can create new workspaces and manage existing ones.