Automated Modular Construction of Models
On Wed, Feb 22, 2012 at 19:00, James B. Bassingthwaighte <jbb2@u.washington.edu> wrote:
Gary, thanks so very much. One tutorial, "Compartmental," is set up to serve as an introduction.
www.physiome.org -> Models -> model search -> tutorials -> compartmental.
There's an introductory essay to read along with a series of models.
Our site needs hundreds of improvements, but we're too occupied with improving JSim. (Try the new Monte Carlo feature for parameter confidence-limit estimation after optimizing to fit data.)
Cheers, and thanks.
jim
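The Monte Carlo feature mentioned above estimates parameter confidence limits by repeatedly refitting against perturbed data after an initial optimization. Here is a minimal sketch of the general idea in Python - it is not JSim's implementation, and the model, the synthetic data, and the residual-resampling scheme are illustrative assumptions:

 import numpy as np
 from scipy.optimize import curve_fit

 def model(t, a, k):
     """Simple exponential decay, standing in for any fitted model."""
     return a * np.exp(-k * t)

 rng = np.random.default_rng(0)
 t = np.linspace(0, 5, 50)
 data = model(t, 2.0, 0.7) + rng.normal(0, 0.05, t.size)  # synthetic data

 # Step 1: ordinary least-squares fit to the data.
 p_best, _ = curve_fit(model, t, data, p0=[1.0, 1.0])
 residuals = data - model(t, *p_best)

 # Step 2: Monte Carlo loop - refit against data perturbed by
 # resampled residuals, collecting the refitted parameters.
 samples = []
 for _ in range(1000):
     perturbed = model(t, *p_best) + rng.choice(residuals, t.size, replace=True)
     p, _ = curve_fit(model, t, perturbed, p0=p_best)
     samples.append(p)
 samples = np.array(samples)

 # Step 3: read 95% confidence limits off the parameter distribution.
 lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
 print("a = %.3f (%.3f, %.3f)" % (p_best[0], lo[0], hi[0]))
 print("k = %.3f (%.3f, %.3f)" % (p_best[1], lo[1], hi[1]))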
From Gary An <docgca@gmail.com> at 18:06 on Tuesday, Feb 21, 2012
Dear James,
Thanks for the email and the information; I'll take a deeper look at the Physiome and JSim sites. Like you, I very much liked James Bower's GENESIS book/tutorial; the maturity of the computational neuroscience community is something that we can hopefully strive to emulate. In particular, the apparent fidelity (perhaps an outcome of that maturity?) of the mapping between the mathematical representations and the neurobiology suggests to me that in this domain (at least to an external observer) neuroscientists' ability to "think" in terms of the mathematical models clears the initial cognitive barrier that seems so present when talking to biologists (of which I am one). My feeling is that the first step towards a biologist's meaningful engagement with computational modeling is to ease that mapping process; my impression is that this is the intent of the JSim GUI, yes?
As I retain a substantial degree of computational naivete, I generally think of myself as a pretty good measuring bar for assessing the accessibility of modeling toolkits for the non-computationally inclined biologist, so I'll give JSim a whirl and let you know how it goes.
Regards, Gary
From James B. Bassingthwaighte <jbb2@u.washington.edu> at 13:53 on Tuesday, Feb 21, 2012
James, I like your GENESIS a lot, and the on-line book is well written. You start at a higher level than we do with our "compartmental tutorial," but it flows well. Have a look at our tutorials (so called since they are very primitive compared to yours) at www.physiome.org
We have translators back and forth between JSim and CellML and SBML, and expect JSim -> Matlab pretty soon.
Cheers,
Jim
On Tue, Feb 21, 2012 at 13:20, James B. Bassingthwaighte <jbb2@u.washington.edu> wrote:
Dear Gary,
All of, or none of, our working groups address the question of how to get people started thinking about modeling analysis for doing science. We have just finished a 4-year NHLBI T15 project to teach courses and assist people with this phase. The results are reflected in our website, www.physiome.org, where we have a lot of elementary models. See the tutorials particularly, e.g. the compartmental tutorial, where there is a sketchy narrative to guide people through the sequence. We want to develop more of these. Would you be interested in critiquing what we have and working with us on improving them, or developing a parallel set? Erik B and Lucian Smith are working on a JSim-to-Matlab translator. Since JSim is so much easier than Matlab for ODEs and particularly PDEs, it may be that JSim can be a starter front end for Matlab?
Jim
From: james bower <bower@uthscsa.edu> 1/18/2012 4:32 PM
Having spent a long, long time working on this problem - there is no doubt that it is a core issue. In fact, most of our efforts over many years have been focused explicitly on how to engage biologists in modeling efforts, starting with the Methods in Computational Neuroscience Course I co-founded in 1988 at the Marine Biological Laboratory. Many, perhaps even most, efforts (and the course itself, since then) are often focused on giving physicists, mathematicians, etc., more knowledge about biology. I would note, however, that Newton was an experimentalist first, whose early mentor told him that if he was going to amount to anything, he would need to learn math.
Anyway, this is an issue I have been concerned about for many years.
You might take a look at the book of GENESIS available on the GENESIS web site - which is an effort to provide a tutorial approach to both computational neuroscience (the first part of the book) and then how to use computational tools (the second part of the book).
http://www.genesis-sim.org/GENESIS/bog/bog.html
We are now working very hard to incorporate these and additional tutorials into the latest version of GENESIS (G-3) which will include new tools for both model sharing and multi-scale modeling.
On Wed, 18 Jan 2012 at 12:31, hsauro <hsauro@u.washington.edu> wrote:
This is hard to say because there is so much overlap between some of the groups. Software, standards, data, validation, simulation, etc. are so intertwined.
Herbert
From: Gary An on Wednesday, January 18, 2012 3:24 PM
Hi all,
A question to Jim and Herbert, then: do you think the current scope of the Model Sharing Working Group encompasses the issues you both note, or might it be beneficial to have a separate or sub-working group more focused on what you are talking about?
For myself, my interest is in getting upstream of both of these points, to helping biology researchers who are not computational modelers express and transfer their knowledge in computational form. Clearly the things you are both talking about (the model taxonomy and the module/standards aspect) are critical targets in trying to "semi-automate" and facilitate the modeling process, but to a great degree they presuppose a great deal of existing familiarity and expertise with the process of mapping biological concepts to computational forms (i.e. one needs to know what an ODE or PDE is, when to use one or the other, the types of data you would use for each, etc.). I think of the initial mapping process as something that can be augmented, and because of that I would be in favor of a new working group on this subject. My proposed goals and objectives for this new working group would go something like:
The goals of this working group are to develop methods and software systems to familiarize the general biomedical research community with the process of computational modeling and enhance their engagement in computational research, specifically targeting the mapping of biological knowledge to computational methods, providing guidance with respect to the capabilities and limitations of those methods, and guidance in the design, execution and role of simulation experiments. These software tools would be compliant with and aid the dissemination of standards developed in conjunction with the Model Sharing Working Group, and would provide an entry point into the tools, methods and resources under development by the Model Sharing Working Group for applicable sets of models.
My $0.02.
Gary
On Wed, Jan 18, 2012 at 1:27 PM, hsauro <hsauro@u.washington.edu> wrote:
To add to Jim's comment on Antimony: this project was started to exercise the new modular proposal in SBML, which is finally gaining some traction. The Antimony scripting language itself was conceived back in 2000, when I started thinking about a modular version of the Jarnac scripting language, and Lucian has continued to enhance the specification since then. It wasn't until we got some grant support and managed to hire Lucian that any progress was made on the project.
One of the nice things about Antimony is that it comes as a library, which means that *any* application can use Antimony and implement the scripting language in their tool. This is how the JSim team was able to rapidly incorporate Antimony scripting in JSim. This comes back to another important point: I am a very big proponent of software libraries and reuse of code. We, for example, have developed a number of portable libraries, including Antimony, a stoichiometric analysis library, roadRunner (our compliant SBML simulator), a new and simplified API to COPASI, a simple API for libSBML, and others under development (e.g. the Synthetic Biology Open Language library). All our libraries are written in C/C++ or other compiled languages (i.e. not Java), which means they are cross-platform AND cross-language (the latter is very important). I emphasize these points because if, as a community, we want to start down the road to modular model building, then not only must we agree on a standard format, but we must also develop software libraries that allow other developers to incorporate the standards in their tools; otherwise the standard is not likely to spread beyond the single group that developed it.
Herbert Sauro
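Herbert's point about Antimony coming as an embeddable library can be made concrete with a short sketch using libAntimony's Python bindings. The function names below are quoted from memory of that API and should be checked against the current documentation, and the two-module model itself is a hypothetical example:

 import antimony  # libAntimony's Python bindings

 MODEL = """
 model conversion(S, P)
   S -> P; k1*S;   // simple first-order conversion
   k1 = 0.1;
 end

 model full()
   A: conversion(X, Y);  // first module instance
   B: conversion(Y, Z);  // second instance, chained through shared species Y
   X = 10;
 end
 """

 # Load the modular Antimony source; a negative return signals a parse error.
 if antimony.loadAntimonyString(MODEL) < 0:
     raise RuntimeError(antimony.getLastError())

 # Flatten the main module and emit standard SBML, which any
 # SBML-compliant tool (JSim, COPASI, roadRunner, ...) can then consume.
 sbml = antimony.getSBMLString(antimony.getMainModuleName())
 print(sbml)

Because the same library can be embedded in any host application, the scripting language travels with it; this is the reuse-of-code argument made above.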
On 1/18/2012 11:03 AM, james bower wrote:
We are putting together a manuscript at the moment on this subject, for submission probably to PLoS. That might foster some more discussion.
Also, I have organized a workshop at this coming CNS meeting in July, where we will dig more into the details of the value and structure of the model taxonomy.
I am also currently writing a chapter for the book I am editing on 20 years of computational neuroscience which will also consider some of these issues.
Jim
On Jan 18, 2012, at 12:02 PM, Peng, Grace (NIH/NIBIB) [E] wrote:
Hello Everyone,
Happy New Year!
Very intrigued by the email thread.
Jim Bassingthwaighte – is this discussion going in the direction of your original proposal to have an Automated Modular Construction of Models WG?
Gary An and Jim Bower – it seems you are in mutual agreement on higher-level issues, but I am not sure where this is going. Are there specific outcomes/discussions you'd like to further flesh out through the MSM Consortium on this topic? I think this thread, or at least part of it, should be posted somewhere on the wiki - perhaps in the general discussion page, http://www.imagwiki.nibib.nih.gov/mediawiki/index.php?title=General_Consortium_Discussions, or perhaps within the current Model Sharing WG wiki page, http://www.imagwiki.nibib.nih.gov/mediawiki/index.php?title=Model_Sharing_Working_Group. (I will add Herbert Sauro and Peter Hunter, WG leads for Model Sharing, on this email.)
Let me try to get some sort of consensus from those on this email. Do we want to create a new WG? If yes – what are the goals and objectives? If no – where shall we post this discussion on the wiki? (You are welcome to use the voting buttons in the toolbar above.)
Thanks! Grace
From: Gary An on Thursday, Dec 22, 2011 at 14:09
Dear Jim,
Thanks for the thoughtful response and perspectives. In particular I appreciate your experiences coming from computational neuroscience, which in my opinion has been and remains out front in the integration of wet and dry lab approaches in the study of a biological process.
I am in absolute agreement about the disruptive capacity of modeling and the importance of providing this disruption. If I may attempt an interpretation of your comments on the tie between an experimentalist's "just-so" stories and their particular methods or model (wet) system, I would describe this as excessive reliance on empiricism, treating nearly any particular biological system as a near "special case," i.e. "my system is different because..." As such, theory currently has no real role in biology, and this is reflected in the inability of most biologists to think abstractly. To me, getting bioscience to scale requires identifying some pathway towards generating theories, and computational modeling is at least one formal step towards that capability. Once we have computational representations of biological stories, we can look for potential transformations and mappings that point towards more fundamental laws.
So, given this goal, what operational strategies can be employed? One strategy, as I believe manifests in computational neuroscience, is the discovery of a sufficiently powerful mathematical abstraction (the Hodgkin-Huxley equations) that is able both to map to generative, physically grounded processes (i.e. what a neuron does) and to larger-scale biological organizational structures (i.e. what more than one neuron does with other neurons), and that is additionally subject to experimental verification/validation. In this case, enough foundational "trust" has been engendered by the mathematical model that non-computationalists will "buy" a story proposed purely through theoretical investigations. This is the physics model, where there is a sufficiently rich iteration between theory and experiment. But as you noted, even in an area so blessed with a core mathematical abstraction, 60 years later there are still a lot of teeth to be pulled.
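For reference, the core abstraction invoked above is the Hodgkin-Huxley membrane equation, which balances the membrane's capacitive current against its voltage-gated ionic currents:

$$C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}}) - \bar{g}_L \,(V - E_L) + I_{\mathrm{ext}}$$

with each gating variable $x \in \{m, h, n\}$ relaxing according to $dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\,x$. The dual mapping described above reflects the fact that every term is measurable at the single-cell level, while the same formalism composes directly into multi-neuron network models.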
So, alternatively, what if you greatly enhanced biological story-telling, not only by providing a means for experimentalists to cut and paste paragraphs of previously existing stories (i.e. the use of model repositories) but also by introducing the capability to present their stories in media that are (to them) new, novel and different? This last point is important because unless there is a great deal of flexibility in the expressiveness of the story-telling format, there will be a nagging doubt in the experimentalist's mind that somehow something has been lost in translation. Granted, this doubt may not ever go away, but here I think the strategy is to target those objections directly by allowing flexibility. Optimistically, if experimentalists realize that their particular story/hypothesis does not actually map to real-world observations, or is not internally consistent, then they would consider this a failure of conceptual model verification and change/modify/evolve their conceptual model. Pessimistically, even if they were not to acknowledge a limitation/insufficiency of a particular hypothesis structure, the transparency offered through formal representation would make this evident to others viewing the "model," and at least provide a formal basis for comparison and analysis. To go back to my comment in a previous paragraph, the goal is to get onto a path towards theory, and facilitating a range of computational representations is the first step towards identifying useful formalisms. I guess my point is that while a large number of the computational models generated from experimentalists' stories may be "just-so," just getting them into such a formal form, and forcing them to make epistemologically vacant assumptions explicit, shifts the landscape of discourse in a way that will allow the introduction of mathematical rigor.
My impression is that we have very similar concepts regarding the intersection of wet and dry experiments, and the challenges ahead with respect to the percolation of M&S into the standard experimentalist's workflow. I agree with the need for standards, and see the current targets of the Model Standards/Sharing group as a critically necessary back-end framework for M&S experiments. However, I think there is a more front-end, knowledge-engineering aspect to biomedical computational M&S, manifest in pursuing the second strategy noted above, that also deserves attention and description.
Thanks for all this discussion; it's very helpful for me to get feedback and the benefit of your perspective.
Regards, Gary
P.S. Also, just so you know, I am a self-taught computational scientist, so I have a great deal of empathy for the challenges of moving from biology to computational science.
On Thu, Dec 22, 2011 at 8:26 AM, james bower <bower@uthscsa.edu> wrote:
Gary,
Interesting conversation and points. I should say at the outset that I am actually trained as an experimentalist, as are the vast majority of neurobiologists to this day, with very little required experience in computational techniques, modeling or simulation. In fact, the vast majority of training programs in computational neuroscience are designed to give computer scientists, mathematicians, engineers, and physicists enough information about the brain to be dangerous. :-)
In most universities, neurobiology graduate students might have at most one course in modeling, simulation and theory - most less than that, usually with only a few smatterings of Hodgkin and Huxley models in a regular introduction-to-neuroscience course. I am actually currently involved in organizing the 60th anniversary of the publication of those original papers next summer - which, obviously, occurred a long time ago.
I point this out because, judging from the history of physics, for example, it is very likely that the structures built around modeling and simulation will forever change how experimental biological science is done. In other words, modeling and simulation, done properly, are disruptive of traditional 'story telling' structures (the default communication mechanism for humans). In my view, this is one reason that core neuroscience has been so resistant to modeling techniques, and the reason that most modelers to date basically use computational techniques as a different way to tell the stories generated and approved by experimentalists.
I actually just wrote about this in the context of having been asked to write an article on "the computational structure of the cerebellum" for a new handbook on the cerebellum being produced by Springer - as if, somehow, the computational structure of the cerebellum were just another category of inquiry, standing side by side with detailed descriptions of the molecular mechanisms underlying synaptic plasticity in parallel fibers. The article ended up being something of an epistemological diatribe on the state of modeling and theory in the cerebellum - which, as I said, is dominated by efforts to tell the stories experimentalists approve of, not used as a mechanism for discovery or even as a mechanism for capturing in a common form (math) what we do and don't know. In fact, the central point of the article is that all story-telling models to date assume (as do the vast majority of experimentalists) that there is a particular relationship between two neuronal components for which, in fact, there is no experimental evidence. The situation is epistemologically equivalent to declaring that the sun circles the earth (which would seem obvious and certainly religiously convenient) and then constructing models based on that story-telling assumption.
Anyway, the point is that, while the reality is that the vast majority of practitioners in biology are experimentalists - with their own structures (in my view, all too often these days organized around the particular methods they use in their laboratories) - I myself don't see any way around the need (and even desire) for modeling and simulation to disrupt a lot of that structure. Given the power held by experimentalists, and their general sensitivity to the perceived (often only intuitively) threat of modeling and simulation to the way they do business, this is going to be a complex and difficult process. However, if we don't organize ourselves and our science in some rational way, we run the risk of being accused of the same sins as the experimental world.
So, I think we basically agree, except that, having lived in the crack between experimental and computational science for many years, I am perhaps less sympathetic to the foibles of either side - and believe that it is important to realize that this effort we have undertaken (and IMAG has undertaken) IS fundamentally disruptive. It will also require that our students be educated in a very different way - part of the reason we are focused on new forms of publication is so that our students can educate themselves. :-)
Finally, as your comment I think suggests, one of the more interesting uses of the taxonomy we are proposing to develop is to re-organize our thinking about experimental techniques. In physics, it is not possible to get funded for an experimental study without reference to a theoretical base; that requirement means that experimental results are, at the outset, linked to the computational structure, which is the persistent form of knowledge and explanation. I have many times suggested at NIH that they could forever change biological science by mandating that every NIH grant include a section on the relationship of the work to existing theory. Hope springs eternal. :-)
On Dec 21, 2011, at 10:27 PM, Gary An wrote:
Thanks to all for the information and references; it's all quite interesting. I'm sorry I didn't get a chance to see the talk behind Jim Bower's slides; it'd be nice to hear more detail on how the different tiers/scales were decided upon, particularly given the potential semantic ambiguity concerning things like "network" and "system," and that "behavior" could in some ways be considered orthogonal to the other labels. All of which reinforces the need for taxonomies, as mentioned, but also the need to draw upon the experience of the ontology people: there needs to be a fair amount of non-rigidity concerning how people/researchers are allowed to choose to express themselves (hence the multiplicity of ontologies in BioPortal). The GENESIS and CBI information in the papers is also interesting, and reminds me a bit of the HLA developed within the M&S community (SISO) to present and potentially enforce standards for M&S. My understanding is that this was initially a DoD initiative to improve interoperability between its systems, something that I know has not functioned as well as they would have liked.
As related in some prior emails, my personal interests lie in Steps 1-4 of the User Workflow related in the CBI-architecture paper, and in the steps proximal to the capabilities being developed at the University of Washington; specifically: how can we facilitate and augment the ability of a non-computer-scientist researcher to begin to input their knowledge into these systems (with their standards and protocols)? I might consider the analogy of how a researcher studying a particular pathway would choose between Q-PCR, a microarray, RNA-seq, etc.: the choice of method would be defined by the requirements of the research question at hand. So should it be with the choice of computational modeling method. I can see some of this in Jim Bower's presentation, with movement back and across the scales and levels of resolution. An experienced computational researcher would be able to navigate this space, but a computationally naive researcher would not. However, I believe that what makes the computational researcher able to do this navigation is a knowledge base that offers the possibility of representing aspects of that decision/modeling process with algorithms, and is thus a target for "augmentation." Such a planning task would necessarily need fixed targets (i.e. standards for simulation creation and simulated experiments), but also the flexibility to account for varied types of expressiveness based on a particular researcher's task. The modular representation of classes of expression, and how they might need to be articulated, are, to me, open questions that can form a very useful complement to the standards and protocol development/model sharing work. In my opinion, this is a huge component of scaling the application of M&S across the general biomedical research community and of being able to utilize the knowledge sharing/representation/dissemination potential offered by the model sharing/standards work.
Is this topic of interest to you? I'm sorry, but I have struggled with how to articulate this need and any feedback any of you can provide would be a great help.
Regards, Gary
On Wed, 21 Dec 2011 at 22:41:30 +0100, Hugo Cornelis <hugo.cornelis@gmail.com> wrote:
Dear all,
Attached you will find the two papers that Jim Bower mentioned.
They have been formally accepted by PLoS ONE and have been assigned DOIs. However, they are still in the production stage, and I hope that they will be online before the new year.
The overall aim of the efforts described in these papers is to come to a federated and collaborative software platform for model and knowledge sharing in computational neuroscience.
The first paper, cbi-architecture.pdf, explains the philosophical workflow that we have followed during the software development of the GENESIS 3 simulator and the CBI meta-architecture.
The second paper, cbi-scripting.pdf, builds further on the principles of the CBI architecture of the first paper and explains the scripting capabilities of the GENESIS simulator and implications for interoperability with other software modules and libraries.
Building further on this we also recently demonstrated preliminary results of multiscale modeling capabilities at the CNS meeting in Stockholm last summer. We are currently writing a third paper that includes a description of these efforts which we hope to submit soon.
Hugo
On Wed, 21 Dec 2011 at 14:00:55 -0600, james bower <bower@uthscsa.edu> wrote:
Hi all, perhaps it is easier just to reference a couple of just-published papers that describe the structure of GENESIS Version 3.0, specifically designed on modular principles for model construction and, importantly, model evaluation:
10.1371/journal.pone.0029018: "Python as a Federation Tool for GENESIS 3.0." Hugo Cornelis, Armando L. Rodriguez, Allan D. Coop, James M. Bower.
10.1371/journal.pone.0028956: "A Federated Design for a Neurobiological Simulation Engine: The CBI federated software architecture." Hugo Cornelis, Allan D. Coop, James M. Bower.
I have also included Hugo Cornelis, the principal software designer for G-3, on this discussion as well.
As we all know, a complex but important task.
I personally believe (and we are working in this direction) that until these methods are used to actually publish models for their own sake, propagation, at least in academia, will be slow.
This is 'sharing' and 'reproducibility,' and in particular evaluation; it is also linked to the essential academic question of attribution and model evolution.
We are currently writing an article on these subjects and how they are linked to scientific publication.
Jim
On Dec 21, 2011, at 1:33 PM, James B. Bassingthwaighte wrote:
Hi Grace, thanks for bringing me in, as this is an area of prime interest.
We at UW already have 3 independent methods of automated (or augmented, if you will) modular model construction. All of them are based on the modules being preconstructed and archived in some standardized form.
The earliest one is that of Gary Raymond (presented at the Montreal SIAM meeting). It is based on using a scripting language to combine modules and to combine equations containing the same variables. The scripting language lists the names of the modules to be combined. This works because the modules are all written in JSim's MML, an equation-based language (a conceptual sketch of this combining idea appears below).
The second is Max Neal's ontology-based combining of modules. Max annotates modules via a set of ontologies to identify terms. He has taken modules from SBML and CellML, "conditioned" them by the annotation, and then combined them into multimodular models that are executed in JSim's MML.
The third is Lucian Smith's combining process using Antimony. This started as a vehicle for SBML-based modules, but now incorporates JSim's XMML or CellML. The execution can use JSim or other simulation platforms that accept SBML or CellML.
Since JSim can now translate from MML into either CellML or SBML, the constructed models can be delivered into any of the three model repositories: the physiome.org site using JSim's XMML, Auckland's CellML repository, or BioModels' SBML repository.
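The first of these approaches - combining equation-based modules by merging equations that share variables - can be sketched conceptually in Python. This is not JSim's MML machinery: the module names, the flux-term representation, and the rule of summing contributions to a shared state variable are all illustrative assumptions.

 from scipy.integrate import solve_ivp

 # Each hypothetical "module" maps a state variable to one flux term,
 # written as a function of the full state dictionary.
 production = {"C": lambda s: 1.0}                         # constant source for C
 decay      = {"C": lambda s: -0.5 * s["C"]}               # first-order loss of C
 exchange   = {"C":  lambda s: -0.2 * (s["C"] - s["C2"]),  # passive exchange
               "C2": lambda s:  0.2 * (s["C"] - s["C2"])}  # between C and C2

 def combine(*modules):
     """Merge modules; contributions to a shared variable are summed."""
     merged = {}
     for mod in modules:
         for var, term in mod.items():
             merged.setdefault(var, []).append(term)
     return merged

 merged = combine(production, decay, exchange)
 names = sorted(merged)  # fixed ordering of the state variables

 def rhs(t, y):
     state = dict(zip(names, y))
     return [sum(term(state) for term in merged[v]) for v in names]

 sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0])  # start with C = C2 = 0
 print(dict(zip(names, sol.y[:, -1])))          # approximate steady state

The point of the sketch is only the combination rule: because every module is written in the same equation-based form, a combiner can merge them mechanically, which is what makes preconstructed, archived modules reusable.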
Other groups elsewhere are developing other methods. The community is not well defined or well connected. There is a lot of development in the defense industry, reflected in part in publications like "Simulation" and the meetings of the Simulation Councils. This group has a lot to teach us.
A working group on modular construction needs to be in close touch with working groups on standards, on sharing, and on reproducibility. Maybe these should be one group, with subgroups.
What do you think?
jim
On Tue, 20 Dec 2011, Peng, Grace (NIH/NIBIB) [E] wrote:
Hi Gary and Andrew,
Following up on Jim Bower's MSM presentation, I'm adding him and Jim Bassingthwaighte to this email thread to weigh in with their thoughts (read from the bottom). Remember, if a WG is to be formed, we need to have some goals and objectives for this group.
Thanks! Grace
From: Gary An on Thursday, December 08, 2011 12:02 PM
Hi Grace,
Thanks for the email. My opinion (as reflected in some of the comments in the prior emails in this thread) is that what we are talking about is a fundamentally different problem than the model sharing/standards one, though very closely tied to it in the big picture of furthering computational research. In particular I would point to the Myths on the link you sent: 1) Models are easily reproducible and 2) Extracting working models is trivial. From an experimentalist standpoint, what these boil down to is being able to represent the "Methods" section of any computational modeling paper in a fashion analogous to those found in a wet-lab paper. The alternative question I/we are interested in is how a researcher can know what methods to use; knowing the answer to this is currently a profession in and of itself (i.e. become a computational scientist). So clearly there must be a target set of Methods for a researcher to select from, but it is unreasonable to expect them to become experts across that entire set of methods in order to decide what might be most effective for them. Furthermore, you are likely to stifle innovation by requiring that interested parties find expressions that fit within a limited set of vetted formats. Don't get me wrong: that vetting is very important, but it also presupposes a set of methods with constrained expressiveness; there is a whole set of other accepted M&S methods that may not yet have percolated into the biomedical arena, and, critically, the means to link different methods together.

So, to some degree, the two projects also need to progress agnostic to each other's specifics: the goal of Augmenting Model Construction is to maximize expressiveness; the goal of Model Standards/Sharing is to refine descriptions of manifestations of particular expressions. A WG on Augmenting Model Construction would focus on identifying the range of methods in which hypotheses could plausibly be instantiated and on facilitating that instantiation, and in so doing would identify new targets for the MSS WG. In turn, those recognized targets from AMC that had been vetted by the MSS would utilize those resources to make sure that, within those bounds, computational research can be held to the same standards as found in wet-lab work. A robust solution to the big picture to a great degree requires that the innards of each goal not limit the output of the other: a "meta-modular" approach, if you will.
Sorry to be long-winded about this. Let me know what you think.
Regards, Gary
On Thu, Dec 8, 2011 at 10:07 AM, Peng, Grace (NIH/NIBIB) [E] <penggr@mail.nih.gov> wrote:
Gary and Andrew,
This is great! I really appreciate this new angle for model sharing - to impact the greater research community that is new to using models. Thank you both for your thoughts. I should have included the current Model Sharing WG page for you to consider as well: http://www.imagwiki.nibib.nih.gov/mediawiki/index.php?title=Model_Sharing_Working_Group. Remember WG 10? (The page you were looking at previously is archived; it has evolved into this current Model Sharing WG.) Let me know if the discussions you would like to hold can be included in the current Model Sharing Working Group, or if they should reside in a separate WG.
Thanks, Grace
From: Gary An on Wednesday, December 07, 2011 6:51 PM
Hi Andrew,
Thanks for your great comments. I'm in complete agreement about the big picture and the link to WG10; the standards and model repository they are working on are reasonable targets for the things I described, and I definitely see a synergistic relationship between the working groups. For instance, Jim Bassingthwaighte would be a great bridging person. Other possible speakers for a session could come from the modeling & simulation community, where this idea of model/ontology-aided design is more prevalent. Another related group are the systems engineers who deal with meta-engineering tasks. I think we could come up with a good group and point to clear intersection points with the other working groups.
Regards, Gary
On Dec 7, 2011 4:34 PM, "Andrew McCulloch" <amcculloch@eng.ucsd.edu> wrote:
Hi Grace,
It definitely is not my area of expertise, but I agree with Gary that the kinds of "automation" he is describing are both necessary and currently not being addressed by the Model Sharing and Standards WG. On the other hand, in the big picture, I think these topics are closely related. The markup languages and model-sharing repositories provide a framework for formalizing and sharing model descriptions, but they don't actually facilitate the process of building models systematically. I think that the same people in WG 10 will be the ones interested in these issues. If Gary can suggest speakers/contributors (himself included) who could educate us on available approaches and technologies, I can help to "market" and moderate these presentations in a way that should be understandable to the typical MSM member who, like me, is probably not very familiar with these methods and what is possible now.
best regards, Andrew
On Dec 7, 2011, at 8:19 AM, Gary An wrote:
Hi all,
Thanks for the email, Grace, and the links. Looking through the Model Sharing/Standards WG wiki is very useful and substantiates my particular interest in the proposed working group (which I might modify as "Semi-automated" or preferably "Augmented" Modular Construction of MSMs), which is in facilitating the ability of non-modelers to construct and use models. To my reading, the MSS WG is primarily working on what to do with models that have already been developed (and perhaps even vetted): meta-data descriptions concerning the ontology of the model, plus how those models were tested/used/verified/validated, and perhaps some inkling as to how such an existing knowledge base of models could be integrated (though this is less clear). To my reading, all this presupposes that such models exist, and therefore does not address what happens "proximal" to the model; i.e. how a particular researcher's hypothesis structure can be mapped to existing modeling methods or, potentially, to certain canonical models such as those held in the SBML/CellML libraries. In short, if one is not already a mathematical biologist/systems biologist/computational researcher, how does a "standard" biomedical researcher start? How can their focus of interest be parsed into mathematical/computational methods? Which methods are most suitable to the questions being asked and the data available to construct the model? What, for them, constitutes a "module"? My opinion is that because it is impossible to anticipate the answers to these questions, any process aimed at increasing the use of computational models must keep the human in the loop, with the operational goal of presenting a set of plausible mappings between the researcher's interest and various types and combinations of modeling methods from which researchers would choose. The targets of those mappings would be compliant with the MSS WG output when applicable, but that is likely to represent only a subset of the range of expressiveness desired/possible as biomedical researchers utilize computational approaches. My own research is currently focused on using AI approaches to help generate these mappings from both free text and bioinformatic analysis, and I am certainly interested in seeing if this has wider appeal.
Thoughts? Gary