2017-09-07 Meeting notes
Date
Attendees
- Pete Rivett
- jacobus.geluk@gmail.com [X]
- Jim Logan
- Elie Abi-Lahoud
- Mike Bennett
- Randy Coleman
- Cory Casanave
- Dean Allemang
- Bobbin Teegarden
Agenda
1) Where we are on our road map.
2) Open Action Items
3) JIRA Issues Review - https://jira.edmcouncil.org/projects/RDFKIT/issues
4) Today's content discussion.
5) For next week.
Proceedings:
20170907 FIBO FPT RDF ToolKit
Action updates
DA and JG met - the problem with running NoMagic headless is solved.
Lots of glossary work. DA: given the suggestion that the header should stay stationary, this was more work than expected. Also changing the order of the model-driven vs. human definitions. Material that was previously done by hand is now in the generator. Some fixes are still needed, e.g. scrolling. Progressing. DA is using the NoMagic templating language for this.
PR: Should properties be included in the glossary? DA: there is a flag for that; it can be turned on. PR: hyperlinks should be used where defined terms appear. JL: the tool allows for that.
Looked at DA's demo of the Glossary. DA demonstrated what it means for the header to stay stationary, and also added a widget to the search that does some form of auto-completion. We want the navigator to also stay on screen the whole time; currently it navigates you away. There is a macro to do the hyperlinks. JL: if DA can share this with NoMagic they can get whatever is needed into SP11. DA will first need to segregate the FIBO-specific changes from the generic ones.
Next action: the "All" files. This is renaming the About files to All files. No progress.
Next: what Elisa was talking about re the CFTC. EK wants to share that, with respect to things like code lists (Individuals) that we develop based on e.g. country codes, cc codes, etc., there may be large numbers of individuals in some of these. For LCC, being submitted at New Orleans, PR created scripts and an approach to auto-generate all of the Individuals needed for Country Codes and Country Subdivisions for this submission. The fact that we can generate these means fewer errors, and continual updating when the source organization updates them. With LCC we are submitting the scripts and the instructions for running them. PR: these are very specific to these data sources, so are not re-usable. EK: the approach itself is re-usable. We might be able to document this and use it for other FIBO users who want to take lots of data from their sources and import it into FIBO. This can be used in BE, FBC and others over the coming weeks. The ontologies that define the codes have changed, so there are changes coming to FIBO, and we can use the new approach for those. We can reference the LCC work. This would go in guidelines to users of FIBO. This approach has been used to handle over 5,000, maybe even 10,000 individuals. What does it import from? For Country Codes, we downloaded from ISO via subscription (OMG liaison and subscription). This was available as either CSV or XML; we used the XML file from ISO. This was mapped using an XSLT transform, plus hard-coded header material for the ontology.
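To make the generate-rather-than-hand-author idea concrete, here is a minimal sketch. The actual LCC scripts used an XSLT transform over the ISO XML; this sketch instead uses Python with rdflib over a hypothetical CSV export, and the base IRI, class name, column names and file names are illustrative assumptions, not agreed FIBO/LCC policy.

import csv
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

# Assumed base IRI for the generated individuals (the minting policy is still a FLT question).
CC = Namespace("https://example.org/CountryCodes/")

g = Graph()
g.bind("cc", CC)

# Hypothetical CSV export of the ISO country-code data, with "alpha2" and "name" columns.
with open("iso-3166-1.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        individual = CC[row["alpha2"]]                          # e.g. cc:US
        g.add((individual, RDF.type, OWL.NamedIndividual))
        g.add((individual, RDF.type, CC.CountryIdentifier))     # assumed class name
        g.add((individual, RDFS.label, Literal(row["name"], lang="en")))

# Re-running this whenever the source organization updates its list replaces hand editing.
g.serialize(destination="CountryCodes-Individuals.rdf", format="xml")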
PR: FLT policy question - to what extent do we want to include reference data in FIBO, and if so, how do we manage that reference data? Also scalability: the Linked Data Fragments approach may be an option for that. Also, how do we mint the URIs for these individuals - should they have the same base as the ontologies, or should we have another policy for minting URIs for the individuals? Country codes and names change quite frequently; NAICS codes change on a regular schedule; others change more frequently. There are 219 ontology files containing country subdivision individuals, and the granularity is mixed. We have started to introduce subdivision types now. This is not just terminology, as the above would suggest - we have actually used ontological definitions of the subdivision concepts and then identified what words different countries use for each of them.
PR: another policy question is how many of these country subdivisions are needed. PR: three policy questions: 1. What is our intended scope? 2. How do we mint the IRIs? 3. What do we expect our users to do with these? These are FLT questions, not FPT questions. PR: once we understand the policy we can use these techniques to implement it; for the FPT, he wanted to let us know there is an approach for this. It may be relevant for interest rates: there is a single FpML file available that lists various interest rates by currency. These are of interest (sic) to the CFTC for understanding IR for IR Swaps. These do not change often. This will be discussed at the FLT.
Next subject: Release Notes. Have we reached a conclusion? DA: JG wasn't sure there was an API to implement PR's suggestion from last meeting, whereby we craft prose for the release note at the point we action the pull request. PR found a way to do that - see the Google link in his email. Should be available from GitHub.
JG: regarding the example that PR mailed (an API that gives you all the comments), not sure we want all of that in the release notes. PR: if the first comment is the one associated with making the pull request, then that's the one to use. This requires some discipline. JG: once you have merged that thing into Master or Pink, and two months later you set a tag in that branch to publish a new version, e.g. a quarterly release, then a process kicks off to generate all changes between that tag and the previous tag - how do you get all the pull requests involved in that? PR was assuming that as soon as the pull request is accepted, we do a publish into Dev and generate a segment of release notes. Then for the three-monthly release we just assemble all those segments.
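As a rough illustration of PR's per-pull-request approach (not an agreed implementation), the sketch below uses the GitHub REST API to list merged pull requests and print each description as a release-note segment; the repository name, base branch and cutoff date are assumptions. The quarterly notes would then just be the concatenation of the segments accumulated since the previous tag.

import requests

REPO = "edmcouncil/fibo"           # assumed repository
SINCE = "2017-06-30T00:00:00Z"     # assumed date of the previous quarterly tag

# List closed pull requests against the assumed base branch.
prs = requests.get(
    "https://api.github.com/repos/" + REPO + "/pulls",
    params={"state": "closed", "base": "master", "per_page": 100},
).json()

for pr in prs:
    # Keep only PRs actually merged since the last release.
    if pr.get("merged_at") and pr["merged_at"] > SINCE:
        # The PR description (the first comment) carries the hand-written release-note prose.
        print("PR #{}: {}".format(pr["number"], pr["title"]))
        print(pr.get("body") or "(no description provided)")
        print(pr["html_url"])
        print()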
EK has questions: for each individual change, documenting which JIRA issue corresponds to the description of the change gives no insight into the impact of the change, so this does not give a human-readable account of what changed. DA: there is some release-note-level description from the person who made the change - hence the reason we chose the pull request, since that is the moment when the person believes they have a solution against a given JIRA issue and are putting it forward for review and release. So that is the moment when you write up the high-level information, impact, etc. That was the rationale.
PR assumes we would have links to the JIRA issues but would not need to include detail; people can dig further via JIRA or can do a diff. JG: we need full traceability/provenance of all the changes - which JIRAs, who approved, what discussion, etc., and what lines of the ontology text were changed specifically for that JIRA.
EK still thinks that what she does in pull requests is more granular, e.g. the pull request rolls up some number of changes - still too low-level for human-consumable release notes. There is often, but not always, a 1:1 relation between a JIRA issue and a pull request.
JG: we work on a JIRA issue in a feature branch. We can also compress, e.g., 10 commits into one. EK: this needs to be written up. PR: if what is currently in pull requests is too detailed, then people now need to change from what they would have put before to something compatible with the policy we are now enacting. JG: we should do things as standard as possible, and this deviates from that. What is the standard way? JG: there are plugins, e.g. in Jenkins, that do this based on git commits, with some clean-up, e.g. "squashing" (what I called compressing above); or you have a commit that just refers to JIRA and you cluster JIRA issues in one JIRA "version", which can be done there (we haven't done this ourselves). It's about collecting the list of commits and the links to the JIRA issues and generating release notes from that. PR: this sounds like a lot more effort; what is the simplest way for now? JG: we should squash things anyway. Some changes are undone - we would not want to see those in release notes, so we have to clean up anyway. Cleaning up git commits is part of this more mature discipline.
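A rough sketch of the commits-plus-JIRA-links idea JG describes, assuming squashed commits whose messages carry RDFKIT-NNN keys; the release tag names and the JIRA browse-URL pattern are assumptions.

import re
import subprocess
from collections import defaultdict

# Commits between two assumed release tags, one "hash subject" line per commit.
log = subprocess.run(
    ["git", "log", "2017Q2..2017Q3", "--pretty=format:%h %s"],
    capture_output=True, text=True, check=True,
).stdout

# Group commit lines by the JIRA issue keys mentioned in their messages.
by_issue = defaultdict(list)
for line in log.splitlines():
    for key in re.findall(r"RDFKIT-\d+", line) or ["(no JIRA key)"]:
        by_issue[key].append(line)

for key, commits in sorted(by_issue.items()):
    url = "https://jira.edmcouncil.org/browse/" + key if key.startswith("RDFKIT") else ""
    print(key, url)
    for c in commits:
        print("   ", c)

Commits that were undone or never linked to an issue show up under "(no JIRA key)", which is exactly the clean-up (squashing) JG says we need anyway.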
DW: What will we do on this for the September 30 release? JG: we are not at that level of maturity yet, so we must create the release notes manually. DA anticipates this. So what is the manual approach from which there is the smoothest transition to our future process?
JG doesn't know; it depends on what plugin we use. We should experiment with that in a side branch and review what outputs are acceptable. PR: here is code to generate a change log that can be customized, and it has links to alternatives: https://github.com/skywinder/github-changelog-generator - for instance, something that publishes in the Jenkins UI. PR: the above link lets you include pull requests and customize what gets included. There is also a list of alternatives.
DA: we should look at this and similar things. We need to do it by hand for now, but without incurring unmanageable technical debt in how we do it. Omar and Dean should read through this; others too if they have time. Get a sense of how it works and align our manual process to be close to it.
ACTION: DA and Omar to investigate https://github.com/skywinder/github-changelog-generator - get a sense of how it works and align our manual process to be close to it.
Back to the actions list. Elie (EAL): the first item is done. He downloaded the file, went through the code and read the comments. It looks like the initial submission in March was almost complete. EAL now needs to document on the wiki what needs to go in the ontology and give those elements definitions, then give MB, JG and others something to review. Things will have instances as URIs: Modules, Domains, etc. EAL needs to arrange the TBox. He will engage everyone via the wiki over the next two weeks. This is a prerequisite to Task 2 on this list.
PR: we previously decided to segregate the source artifacts and the published artifacts, with a mapping between them. EAL: source vs. published should reflect the state of what we have. We should discuss metadata similar to the OMG metadata spec, at the level of ontology/file, concept, etc. MB: this will support the decisions we need to make, but the decisions don't make themselves.
Can we turn MB's work into a list of things we need to decide about? PR: we also need to standardize our terminology, e.g. what do we mean by Product Line versus Product, and so on.
DW: what does EAL need? PR: he needs to see us standardize our terminology. MB: he needs to see the outcome of the decisions.
PR: the BDTM needs to be updated; it still refers to colors. DW is working on that.
Back to the agenda. VOWL: JG did give the credentials to the person in India. They will work on that together.
Landing pages action - maybe one of two things. This is the version-selection landing page; is this DA's job this time around? JG: we are skipping that page and going straight to the Master version, so you don't get the option to select another branch. DA: every time a pull request is actioned all the machinery comes into action, so this should all go there without problems. We would only need to add some way of offering an option for the non-Master versions, and figure out the user experience for this. This has changed: every branch now includes both Prod and Dev content ("branch" here meaning a branch from a GitHub point of view); Prod and Dev are neither branches nor tags - they use the metadata annotations from Elie's file. DA knows what to do.
52 OK action: on track. No issues with this task
Round tripping - MB reports. Sounds OK to JL. Sounds OK to DA.
Architecture picture (JG): nice to have.
SKOS flavor item is closed.
ISO 20022 naming conventions - not done (MB)
Verify origin action (MB): done.
Loans progress - Good progress within FND
DW talked to EK, working through her list with DA.
DW: we need words to publicize what we are going to release, including words describing what has changed and what the new spec will look like.
Will we work from what we did before? Will we have a professional-looking landing page? DA: at some point we need someone who is good at web design to overhaul that. DW: once the new EDMC web page is up we can use some of the resources who developed it to enhance our FIBO landing pages. Meanwhile we will add some new features to the current page, and also fix up some issues that e.g. John Gemski pointed out. DA already has John G's issue in JIRA and expects to complete it this week.
Other issues, e.g. when David (N?) had an issue getting to the SKOS pointer. DA is aware of this.
Next call Tuesday 9am EDT.
DW - Mike Atkin is keen to demonstrate the build process to John Bottega, Mike Meriton and McQueen. This will be in October.
Decisions:
Action items
- Dean Allemang and Omar Khan: investigate https://github.com/skywinder/github-changelog-generator - get a sense of how it works and align our manual process to be close to it.