Nordic-RSE get-together online event, Nov 30 - Dec 2, 2020
Are you more interested in the software and technology of research than
in producing as many papers as possible? Do you wish you could share your
interest with others who feel the same? You aren't alone: this interest
has a name, Research Software Engineering
(RSE), and this conference is dedicated to it.
Our main Nordic-RSE conference has been postponed due to the COVID-19 situation, but you are all invited to our first
online get-together event. We plan an afternoon keynote
session on Nov 30th, followed by two days of talks on Dec 1-2 (mornings
for workshops and talks, afternoons for discussions). Topics include
both the experience of being an RSE and tech tools useful for research.
All participants, speakers, organisers, and volunteers at this event are
required to conform to the following
code of conduct.
The importance of Research Software Engineering as a role, a discipline and a
community is becoming more and more widely recognised because it is essential
for harnessing the opportunities of modern, computational research. Alys
Brett is head of the Software Engineering Group at the UK Atomic Energy
Authority and founding president of the Society for Research Software
Engineering. She has just handed over the leadership of the Society after
several years in that role. In this talk she will share the experience from the
UK of building recognition for the RSE role and developing groups, career
structures and communities, and reflect on where we are now with this
international movement.
Can a research software engineer also be a "research data engineer", or do you think we will need a new "RDE" role?
I think RSEs often need to do a bit of everything so in some projects they will be the data engineer and will probably need to be able to navigate the basics and research the rest. There definitely are distinct roles relating to research data engineering and management though and we should promote recognition of and collaboration with these complementary roles too. I have an RSE team and a research data engineering team in my group and there is a lot of overlap in skills but some greater emphasis on devops and data management over numerical modelling and statistical methods in the data-systems-focussed team.
Related to the above, how does RSE relate to many other 'support staff' kinds of roles, even if software is not their main focus?
In the UK, some RSE groups are part of Research IT services departments and some are within academic departments. Similarly, individuals will have different kinds of contract. In some places the distinction between researchers and support staff is very rigid and limits what you can do, and in others it is more flexible. We have found there is no one size fits all approach to how to make it work which is one of the reasons starting such a group is hard as you have to get into the specific way finances, contracts, HR etc work in your institution. The words "support staff" can be a bit controversial, partly because of the hierarchical culture in research (which is a problem in itself). I prefer to talk about "specialist roles" and "professional collaborators/consultants" in various fields to set the expectation that RSEs and researchers are collaborating as equals with complementary skills. There can definitely be a similar model in non-software but research-related specialist roles and common cause in developing the culture and the structures to support those careers and skills.
What might the value be for an RSE group to hire a software engineer that has not worked with researchers before?
[name=a] I think there is value in there. As an RSE you naturally tend to split your time between doing, teaching and learning. Having a dedicated Software Engineer with experience churning out good quality code and familiar with the necessary concepts can be very useful. I've generally had people like that close to me and it's useful to bring them in to give talks, help with course material, workshops and so on. They also get something out of it - experience in working with researchers.
For researchers who were not exposed to software engineering in a formal way, there are very few opportunities to pick up best practices. There are no university courses for such things either. How do we fill this vacuum?
Software Carpentry Workshops aim to introduce "basic lab skills for research computing" in a 2-day workshop (e.g. programming, version control and the Unix shell)
CodeRefinery! More advanced for practicing researchers.
Increasingly part of Researcher training programmes. RSE groups in UK often run training and some teach parts of undergrad and postgrad courses
On the job... Richard is covering this well :-) pair programming and informal interaction with people who have the skills along with workshops/online courses etc, but need the culture in research to value and support this
What's the relationship between RSE and the narrower "Bioinformatician" role that has gotten more traction and recognition over the last couple of years?
I think about it in terms of overlapping communities, so bioinformatics is a possible specialism for an RSE and some (most?) bioinformaticians will regard themselves as RSEs. I have heard the term "pet bioinformatician" used by people who were the sole person doing the programming for a group and feeling a bit isolated/unsupported so I think they can benefit from a wider community and the strong overlaps with methods and tools used in other research fields.
Is there a "career path" for RSEs in the UK now?
It's not a completely solved problem, but the larger RSE groups will often have RSEs at multiple grades so there is scope for progression. In my group there are four levels: graduate RSE, RSE, Senior RSE and team/group leaders which are the same as levels in research groups. For RSEs in research roles
Depends very much on the group. For example, I was employed as an RSE, but to get a promotion I was treated like a postdoc and required to publish papers. I think it needs a department head who understands the problem and knows who they have, otherwise they lose them.
In my department people (i.e., professors, researchers, managers, etc.) do not understand the difference between RSE, post-doc or teaching-assistant: they treat everybody in the same way (although we all have different salaries, duties, etc.) and expect all of them to support them, their research and their teaching, in the same way. How can I/we make them understand what RSEs are?
Influencing professors is possibly the hardest part of this whole effort. I don't think there is a magic answer but when RSEs are in demand from multiple groups they are in quite a strong position to explain how they can best use their skills to collaborate and to prioritise projects where they can make the most effective contribution. Some groups have written down criteria for accepting projects that include the ability of the research group to work with them effectively and the opportunity to transfer skills to the researchers. Also, they sometimes listen more when they hear it from outside so getting talks (or a couple of slides in a talk) about what RSE is into big domain conferences can be good.
Research Software Engineering: it's obviously about software, right?
It could be, but I believe we can adopt a broader viewpoint. We have
all heard countless times about the systematic factors affecting
inequality in science, but how much does access to computing, or
computing training, contribute to this? In this talk, I will first
outline some factors contributing to inequality of computing which I
have noticed after years of supporting researchers. I will relate
this to the services which can be provided by RSEs, and present a
vision for addressing this by developing our own skills and promoting
RSE services to our institutions.
Perhaps getting sympathy from more traditional professionals in academia might not be easy because the RSE career path is not nearly (yet?) as "hard-coded" as that for, say, a professor.
I agree with this. As an RSE, will I just need to leave academia sooner or later, like all other "non-professors", or do I just become the old IT staff member? insert steve buscemi meme "hello fellow kids"
[name=a] A lot of people do leave, like me, but I don't think that's necessarily a problem. There need to be people in industry who can work with RSEs, and a goal I'd like to see is good industry/RSE collaboration, with RSEs being the people speaking the common language in partnerships. Quick plug for my talk on Wednesday containing some information on both sides of industry partnerships!
Apprenticeship can also happen by reading how others work (read their code, watch their code review, watch the tools they use)
[name=speaker] yes! I've learned plenty by reading the right blogs, for example.
Suggestion: talk about equality of opportunity to acquire knowledge/skills rather than simply saying equality, e.g. we want to achieve equality of opportunity for everyone to be able to acquire knowledge, rather than equalizing everyone's knowledge in a certain field.
[name=speaker] Yes, that's correct. We can't guarantee the same outcome, but hopefully people have the same opportunity, without implicit prerequisites that some people don't have.
What is your take on the implications of cultural differences for "supporting equality of opportunity"? For example, many of your ideas seem easier to apply in a collectivist society as opposed to an individualist one, which most Nordic countries could probably be categorized as.
[name=speaker] This is not my speciality, but I think most of the points I make come about because we are very individualist and assume that everyone can make their own way. That breaks down when not everyone has the network to do that. Perhaps you could even say that those who think they succeed as individualists often happen to have these implicit networks that make it possible, yet this doesn't get recognized.
So true! I learned so much when sitting down with an RSE or with a software engineer in their office. We had really good sessions. The software engineer did not have a background in physics; I did not have a background in computer science. But I think we made a pretty good team learning from each other. No chance to do this in the open space where I had my office.
Indeed. Learned so much by somebody telling/showing me: "hey look at this cool thing I found out"
Re computer skills, I also notice some researchers seem shy to share the code they wrote because they think it's "sloppy". I always try to remind them that programming is a secondary skill to them (as it was for many RSEs)!
Important to treat code as having 'group ownership', to take the personal pain out of showing code. Code reviews are great for learning this kind of separation of person from code. You critique the code, not the author!
Suggestion for a substitute for "academic vs vocational skills": actionable vs non-actionable skills. The latter framing creates less stereotypical or stigmatizing bias against academia, or likewise against industry/practice. Moreover, actionable and non-actionable skills occur on both sides; it's just that academia is more prone to this since there is more room for theoretical material.
[name=speaker] Thanks, nice idea. We'll have to make sure that the terms are also clear without further elaboration, but this is a good start.
I do wonder if we should call ourselves "engineers" if we don't really have the solid technical skills I associate with an engineer.
At Lund University, physicists etc. are just hired as "research engineers" because they are not hired as scientists, postdocs or professors, but they have a permanent position. Here "engineer" seems to be just a job description.
I like the idea of making services fairly available for making/contributing to equality. On the other hand, I have seen cases where devoting RSE time and effort to one project/researcher could be really time-consuming, and at some point researchers/research projects need to buy out such a service from RSEs. Then "the rich get richer" happens again. How do we tackle this type of problem?
[name=speaker] We can't solve everything, so there is a tiered system: some basic resources for everyone, while long-term support is paid. It's up to us to convince our funders to strike the best balance.
What about publishing code, papers and credit? Should an RSE be included as an author on papers where their contributions are crucial to the result?
Only if it makes a difference to the RSE. Hopefully they have no pressure to publish. Maybe the RSE team/program should be attributed instead.
I strongly think that if code is fundamental to the results, i.e. if you're modelling some physical process, the author should be credited. After all, that person is contributing to the quality of the results.
I argue they should be included!
[name=speaker] When you realize there can be separate software authorship from paper authorship, there is more flexibility to do the right thing in each case. Is the RSE doing creative work about the science or the software? Is the software the science?
[name=w] This is a good point, maybe the CFF initiative can help with that.
I think some universities are still in the awkward situation that they do not acknowledge the importance of RSE. Would be good to find ways to highlight such importance with the help of the RSE network.
There are various sources of material about this online, and it was proposed as a topic for this event or next year's conference. Hopefully someone can link it here.
[name=a] The UK-RSE community has been and is still struggling with this, but it's improving a lot over time. Pointing Nordic universities to the UK and showing what has been happening there demonstrates what RSEs can bring, why they are needed, and the path to follow.
I saw a brief note about gender balance. I'd like to point out that it is nice when everyone feels welcome and participates equally: not only male and female but also non-binary people.
Thank you for pointing this out. Indeed we need to improve this to create a welcoming environment for everybody.
threaded discussions (every topic is a thread). good for asynchronous work and remote work. (https://zulip.com/help/about-streams-and-topics and https://zulip.com/why-zulip/)
Plus, it's Open Source :smile:
:+1:
can be self-hosted
Also, messages are stored for longer (at least with a non-paid plan).
Unsure about this, but open-source or non-profit projects can apply for a free premium plan (keeping the entire history), which worked for us for a couple of chat instances.
Slack only stores 10k messages, while there seems to be no limit in Zulip. From personal experience (for whatever that is worth), messages disappear much quicker in Slack than in Zulip (the Slack channels I'm in are also way less active than the Zulip channels).
Fair enough. After reading the responses and digging a bit more into Zulip, it seems there are a number of essential features that make the app more efficient to use compared to Slack. Still, there's resistance: most of the work is already done on Slack and many people are already using it, so it is not convenient to have other apps as satellites around your main messaging app. Moreover, I thought that for a community such as Nordic-RSE, which is trying to attract more members and activity, it would probably make more sense to use a more common messaging app. Nevertheless, thanks for the responses and for introducing Zulip to us all, I'll give it a shot ;)
good point about yet another tool/app.
A little downside: on a mobile phone it was less responsive than Slack a few years ago when I last tried it. I always use it in a browser on my computer.
Wonder how many here already use Slack compared to Zulip
Don't forget MS Teams, which is the "official" one here at UiO! Just don't get me started on how much it lags behind Slack...
These days most know and use Slack and very few know and use Zulip. So it is still niche but I think this tool was a good choice for the CodeRefinery project.
How is Zulip integration with other apps (specifically Dropbox, Google Cloud and Todoist) when compared to Slack?
Here is an overview https://zulip.com/integrations/ but I have only tried GitHub integration so far
Thanks for the link. I see there's no Todoist yet. The others I mentioned are there, but that doesn't say much about how well that works.
For reference, a comparison of other options from g2.com
What are you expecting from this meeting?
See what others are doing around Nordics, what cool tools they are using and what types of problems they are solving.
Would be interesting to see what career paths exists in other groups / universities.
15:30 (CET)
Close
Tuesday, December 1st
9:00 (CET)
Welcome and Introduction to day’s schedule
9:05 (CET)
Lightning talks: Introducing groups
(Chairing this session: Naoe Tatara)
The EuroCC National Competence Center Sweden (ENCCS)
was established
on 1 September 2020 with funding from the Swedish Research Council
(https://www.vr.se/english.html), Vinnova (https://www.vinnova.se/)
and the EuroHPC Joint Undertaking
(https://eurohpc-ju.europa.eu/). ENCCS is one of the 33 national HPC
competence centres across Europe.
The mission of ENCCS is to develop competence, knowledge and support
in Sweden to enable academic and industrial researchers and high
performance computing (HPC) users to take advantage of both
forthcoming (pre-)exascale EuroHPC resources and modern
artificial intelligence and high-performance data analytics (AI/HPDA)
methodologies.
ENCCS has research software engineers from different backgrounds who
are both training researchers through workshops and hackathons and
supporting selected research software to run on (pre-)exascale
systems. We also work with industry through the Research Institutes
Sweden (RISE) and offer support in writing
EuroHPC-JU systems access proposals.
The Nordic countries host several large-scale scientific experimental facilities, including Photon and Neutron (PaN) research infrastructures, in particular the MAX IV synchrotron laboratory and the European Spallation Source (ESS), both situated in Lund. With the excellent brightness of the particle accelerator sources and fast detectors, enormous volumes of scientific data are produced. Almost a thousand scientists annually use these research infrastructures to conduct experiments in biology, chemistry, physics, materials science and also geology and cultural heritage. In late 2018 several European PaN research infrastructures, including ESS, started a project called PaNOSC [1]; they were complemented a year later by the ExPaNDS [2] project at national PaN facilities, including MAX IV, within the European Open Science Cloud (EOSC) initiative. Both projects aim to expand practices of scientific data management and analysis towards Open Science and the FAIR data principles. The strategy, several scientific application cases intended to prototype EOSC services for the PaN user communities, and the chosen tools will be briefly introduced, giving an essence of what the future scientific data service for these communities can be.
In this short presentation I will discuss how we grew the CodeRefinery project
over the past 4 years and taught best practices in reproducible research
software engineering to hundreds of students and researchers across all
disciplines.
I will highlight how we transitioned from in-person workshops to online
training and the team effort which made it possible to scale the workshops to
almost 100 participants per event.
MXAimbot is a neural network based tool currently in development, designed to
relieve researchers of the task of manually and individually centering their
samples in synchrotron beamlines for macromolecular crystallography.
How does it do that?
It is a pretty simple CNN trained on a few thousand images from a camera
pointed at the loop which holds the samples. These images are annotated with
coordinates, height, and width.
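The abstract doesn't give implementation details, but a minimal sketch of a bounding-box-regression CNN in this spirit, assuming PyTorch (the `LoopCenteringNet` name, layer sizes, and image size are invented for illustration), could look like:

```python
import torch
import torch.nn as nn

class LoopCenteringNet(nn.Module):
    """Tiny CNN mapping a camera frame to (x, y, width, height)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 64, 1, 1)
        )
        self.head = nn.Linear(64, 4)   # regress x, y, width, height

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LoopCenteringNet()
frames = torch.randn(8, 1, 128, 128)   # stand-in for annotated camera images
targets = torch.randn(8, 4)            # stand-in for (x, y, width, height) labels
loss = nn.functional.mse_loss(model(frames), targets)
loss.backward()                        # one regression step (optimizer omitted)
```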
Why?
Because the other two alternatives are:
Manual centering by humans, which is boring and tedious and consumes researchers' valuable time.
X-ray centering, which can cause radiation damage to the crystal.
The PRACE Best Practice Guide for Modern Processors (ARM
Kunpeng & THX2, Intel Skylake and AMD Rome) has just been released. A short
introduction to the guide will be given. Topics cover architecture, programming
environment, tuning, performance libraries, performance and an introduction to
European systems using these processors. A couple of hands-on examples and
tricks for using some of the tools in an optimal way will also be presented.
Writing an R package has helped me leave my comfort zone and level up my R
programming skills. The code I write as a researcher is mostly single-user and
single-use. Writing and publishing code meant for others has helped me break
old habits and acquire useful new software engineering skills. R has a
streamlined ecosystem for package development that supports understanding and
adhering to best practices. I will talk about the things I have learned while
writing my first R package, why I think writing a package should be a rite of
passage for any aspiring research software engineer, and why R is a great tool
for this.
We would like to have a discussion on computational reproducibility and the
FAIR principles in relation to RSE. In particular, we hope to flesh out some
stories on challenges/solutions related to computational reproducibility, e.g.
experiences from trying to rerun an analysis on a new system or
training/supporting others in reproducible practices. Ultimately we aim to
draft some tips/tricks or a checklist of things to consider to address common
pain points.
Some work on surveying the field is being done in the Research Data
Alliance (RDA), FORCE 11 and ReSA etc. but it would probably be an
interesting discussion to have with a Nordic RSE perspective.
A result of this discussion could be published as a blog post.
Unit tests are fixed sequences of function calls that set up the software in the right state and test the outcome of one or a couple of functions. Unit testing has the advantage that the functionality under test is relatively clear, at the expense of generality. It is not feasible to create a diverse set of test cases by unit testing alone; we need higher levels of abstraction.
Model-based testing allows a developer to create a higher-level model of software, which models the functionality of an entire software module. A good test model is capable of generating diverse test cases with different API calls and parameters, while still having a relatively precise test oracle.
We will first present model-based testing and our experience with testing Apache ZooKeeper, where we found an unknown, complex defect. After that, we will give a tutorial where participants can create and modify models of Java collections.
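The tutorial itself uses Java collections and a dedicated model-based testing tool; purely to illustrate the contrast between the two styles, here is a sketch in Python using the Hypothesis library's stateful testing (all names below are invented for the example):

```python
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

# A unit test: one fixed call sequence with a clear, specific oracle.
def test_append_then_pop():
    xs = []
    xs.append(1)
    assert xs.pop() == 1

# A model-based test: Hypothesis generates many diverse call sequences
# and checks the system under test against a simple reference model.
class ListVsModel(RuleBasedStateMachine):
    def __init__(self):
        super().__init__()
        self.real = []    # stand-in for the implementation under test
        self.model = []   # higher-level reference model

    @rule(x=st.integers())
    def append(self, x):
        self.real.append(x)
        self.model.append(x)

    @rule()
    def pop(self):
        if self.model:
            assert self.real.pop() == self.model.pop()

    @invariant()
    def same_contents(self):
        assert self.real == self.model

TestListVsModel = ListVsModel.TestCase  # collected and run by pytest
```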
The Nordic RSE group is new. This is a great time to take part in the
active development of the organization by making sure your voice is heard.
We need your help in deciding what is important for the organization to do
and what kind of an organization we should aim for.
For inspiration, you can check out the websites of other RSE associations:
The purpose of this discussion is to solicit input from prospective members
of the Nordic RSE group.
a) What kinds of activities would be useful?
Advocating for X (value of good software, ...)
Networking opportunities, workshops
Exchanging skills
Formalizing the profession (create career structure)
b) How should the group be organized?
Registered association
Professional organization
Something else
Questions / discussion prods:
Do you consider yourself an RSE? Who is an RSE?
Would you join the Nordic RSE organization? Are you a member?
Why would you join as a member?
If you have joined other professional associations, what have they done for you?
Questions and comments
Ice breaker:
Are you an RSE?
RSE interested, with a plan of becoming one, some day, maybe
yes, leading an RSE group
yes, working directly with researchers (no RSE group, temporary employment, also just learned about the term)
yes, though only learned the term recently. In a bioinformatics group doing infrastructure projects
What is an RSE?
research software engineer (duh)
supporting researcher through code/software development (but can be part of a researcher job)
A software engineer in the area of research (akin to a software engineer in the area of finance). Also to say that domain knowledge is important. Probably more focused on software as infrastructure.
What an RSE is not?
Sysadmin/IT support (although there's typically a lot of IT support involved...?)
Can someone make a career as an RSE? Or is it where you end up when everything else fails?
Hopefully it is a career path. There are (supposedly) RSE positions and openings, but in my experience they are hard to find. It is definitely not where you end up when everything else fails! It is an alternative path to regular academic research (and to software development in industry). It is typically suited for people who want to focus more on the hands-on problem solving rather than "selling" research and applying for grants, also with a more long-term focus on software usage and sustainability (which is hard to focus on when under academic pressure).
For a lot of people the name "Research Software Engineer" is confusing: they only put emphasis on the word "research", thereby expecting the RSE to be some kind of (lower-grade) researcher with a more technical background than a "true academic researcher", and that bothers me (because in academia researchers are not supposed to be evaluated based on the number of scientific papers they produce). Maybe removing the term "research" entirely and replacing it with something like "domain-specific" or whatever else would make it easier for people to understand and better define their role? Add something to link it to science and perhaps to computing infrastructures (that could change the acronym from RSE to something like "Domain Specific Scientific Software and Computing Infrastructure Engineer" = DSSSCIE or DoSSSCIE or DoS3IE instead?)
a) What kinds of activities would be useful?
examples from abstract: Advocating for X (value of good software, ...)
Networking opportunities, workshops
Exchanging skills
Formalizing the profession (create career structure)
ideas that came up during discussion:
build an identity on what is RSE, local communities: communication with others, sharing ideas
be more specific about what an RSE is and also what it is not (not a researcher, not a technician, not a handyman, etc.)
-> making it possible for people to label themselves as RSE
Sweden: "Research Engineer" exists and gets confused with RSE
there should be a push to make it possible to hire RSEs
-> Nordic RSE should push to make it an official title
this has to happen also within universities
but RSE is not yet a field that professors could choose to hire in, and that's where Nordic RSE could start (big push from a lot of people)
-> association with members could do that
if Nordic-RSE grows to 500 registered members in the Nordics who call themselves RSEs, there would be more weight behind what we do. One could write letters and influence
Nordic RSE as a place for feedback (e.g. for the Norwegian Research Council)
building understanding (people should know the idea) and not fully focus on formal job title (since this may take long). Make groups and PIs aware that a person with a RSE role could solve many common problems faced in research
defining the RSE: how much research? how much software? how much engineer? -> check what the UK has done; one definition on the Nordic RSE website https://nordic-rse.org/#what-is-a-research-software-engineer
knowledge sharing in meetups etc
permanence: RSE often first position to get cut, importance of position has to be highlighted
service / research
job board: help finding RSE jobs (that do not mention RSE specifically), currently done on the CodeRefinery chat -> better to have on the webpage
acknowledgement for RSE work, writing documentation could be as important as writing a paper, attempts happening in international RSE
As a new RSE, I want to ask if acknowledgements is a metric that is typically tracked and used throughout an RSE career? How important is it (apart from being fair in acknowledging work done)?
Hackathon type of event to get advice on publishing code and related topics
joining forces to organize (online) workshops similar to coderefinery (so that community can link to it) -> setting up a list of these on nordic RSE website that we can recommend
use nordic-rse.org to build up list of recommended training material and resources.
networking: https://coderefinery.zulipchat.com/
b) How should the group be organized?
ideas from discussion:
Should we have official association?
necessary to receive money
for e.g. workshops (continuation of CodeRefinery, Software Carpentry)
could also be done via some university
will probably need to be done at some point
How could NordicRSE association be useful for us:
how to attract also researchers doing RSE in addition to RSEs
-> highlight that we also want researchers doing RSE work to join NordicRSE
acknowledgement also for their RSE work (even if they do not identify as RSE (yet))
-> continue to teach how code can be made citable
-> how else to help with this?
-> public guidelines on how to cite research software -> community shows how the ideal should be
-> list journals which take software publications on Nordic RSE website (similar to UK RSE)
advocate that software should be cited
create a career path, or publicize that one exists; if there is no path, people may not be interested
adding open source research software as merits for universities (making it more likely they will spend money on hiring RSEs)
GROMACS is a free, open-source molecular dynamics community code mainly
designed for simulations of proteins, lipids, and nucleic acids. It is one of
the fastest and most popular scientific software packages available, and can
run on central processing units (CPUs) and graphics processing units (GPUs). In
this session, Mark Abraham (former development manager of GROMACS) will
illustrate software development practices that helped build the GROMACS
developer community. Mark will be happy to take any questions you might have,
e.g. on how to apply similar ideas to the software projects you are working on.
Questions and comments
You moved from specific tools (Gerrit, Bugzilla) to GitLab. Do you think the integrated solution is better than specific tools?
Specialized tools can lack integration with each other, integrated tools work as a whole
How often do you need to deal with support requests from your community that are related to somehow having GROMACS installed or compiled incorrectly (and is there stuff you do to avoid problems like that from happening)?
can avoid some support questions on installation through continuous integration
Regarding unit tests: How do you find the sweet spot between creating too few unit tests and trying to come up with every single input combination so all bases are covered and your software is fool-proof?
No perfect solution, need to choose a balance. Physics constraints provide useful general tests.
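A conservation law is an example of such a general oracle: it must hold for any input, so one test covers many cases. A small sketch of the idea (the `step` integrator below is a hypothetical stand-in, not GROMACS code):

```python
import numpy as np

def step(pos, vel, dt):
    """Hypothetical stand-in integrator: free particles, no forces."""
    return pos + vel * dt, vel

def test_total_momentum_is_conserved():
    rng = np.random.default_rng(42)
    pos = rng.normal(size=(10, 3))
    vel = rng.normal(size=(10, 3))
    p_before = vel.sum(axis=0)          # total momentum before integration
    for _ in range(1000):
        pos, vel = step(pos, vel, dt=1e-3)
    # The physics constraint holds regardless of the random initial state.
    np.testing.assert_allclose(vel.sum(axis=0), p_before, rtol=1e-12)
```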
Did you do specific outreach activities to reach new contributors?
Pre-COVID there were some developer workshops for external people. Some have later joined the dev community.
The philosophy of HTCondor is to allow researchers to easily automate and scale their workflows for greater overall throughput with minimal changes to the analysis code itself. The objective is to run jobs as efficiently as possible wherever there are available resources. CHTC's HTCondor software suite provides not just the batch system but a toolset that includes workflow pipeline automation, performance evaluation, and containerized environments. This demo will cover:
Running a cluster within a Docker container on Windows
Using the Python API to construct and submit a multi-layer workflow
Parsing log-files for performance information
Prior knowledge
Some experience with a cluster batch scheduling system
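As a rough idea of what the Python-API part of the demo might look like, here is a sketch using the `htcondor` bindings (exact calls vary between HTCondor versions, and the job settings and `analyze.py` script are illustrative, not from the demo itself):

```python
import htcondor

# Describe a job using ordinary submit-file keys and values.
sub = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "analyze.py $(ProcId)",   # hypothetical analysis script
    "output": "job.$(ProcId).out",
    "error": "job.$(ProcId).err",
    "log": "job.log",
    "request_cpus": "1",
    "request_memory": "512MB",
})

schedd = htcondor.Schedd()                 # the local scheduler daemon
result = schedd.submit(sub, count=4)       # queue four jobs in one cluster
print("submitted cluster", result.cluster())

# Parse the job event log for bookkeeping/performance information;
# stop_after=0 should return the currently available events without blocking.
for event in htcondor.JobEventLog("job.log").events(stop_after=0):
    print(event.cluster, event.proc, event.type)
```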
Below you will find a proposal for the discussion "What academic RSE could learn from startups?" on the 1st of December, 14:00.
If you feel that you share this frustration about research software, and you would like to join the discussion session, feel free to comment on the proposal, and let us all know in advance what your experience is.
The academic world strives to perform the best research possible. The research that was done thirty years ago created a foundation for modern-day computational methods in many areas. But today many academic areas suffer a reproducibility crisis. Letters and papers about the reproducibility crisis are regularly published in high-impact journals... and nothing changes. Poor scientific software is considered one of the major causes of the crisis.
From a startup perspective, academic environments often look outdated and generally wrong. CI/CD, a shared codebase, code review, Agile, and orientation to the product are seen as necessary just to survive in the startup world. At the same time, these concepts are completely unheard of, or even opposed, in most non-CS academic places. Why is it so, and what can we do about it? Do we really want reproducible research, or do we only want to grumble about it?
The discussion will:
Start with discussing the experiences of the participants,
Analyze a trade-off between the benefits and the costs of reproducibility, and how it affects research,
Compare the benefits of teamwork with the academic "single researcher" mentality, and check how it affects RSE's outcome,
Discuss infrastructure and management problems,
Summarize potential solutions.
We hope to meet everyone who feels they have the same problem in their area of research.
Questions and comments
Stats: 1/2 are "senior" staff, 1/4 are PhD fellows, 1/4 are others
Problems:
Individual publication pressure
Publications are KPI
"Software won't give you a PhD"
Individual work is expected
This leads to people using their limited time towards personal research rather than developing tools and collaborating
"Cultural inertia" among peers and leadership doesn't help
No clear future career and role model
No good role models, no understanding of how to transition from MSc/PhD to an "RSE"
No clear expectations how much freedom to do research an RSE should have - is RSE a researcher or employee?
No resources and training
There are not enough knowledge resources and training
And different backgrounds need different training
Solutions needed:
Promotion of team work (both RSE + "scientists" for more papers and RSE + RSE for day-to-day working and learning)
"In industry you may go to other people who would complement your skills"
Adoption and enforcement of industry's technical solutions for co-developing (VCS, etc.) to enable the co-developing itself
Technical debt is addressed in product startups because the quality of their product matters - doesn't the quality of research matter too?
Allocation of time for teaching and knowledge transfer
Remember "bus factor" - how many RSEs need to leave the group for its research to fall apart?
But no one makes these solutions!
Actionable steps - what could we do as the RSE society?
Public advocacy campaign towards funders - they should fund RSE projects and put pressure on leadership!
Advocacy campaign towards leadership - they will benefit the most, because good RSE practices pay off in the long run, over 2-4 years
Comment: In Sweden, within the UPPMAX HPC facility (I don't know how it is handled at other centres within the SNIC organisation) and the NBIS organisation, application experts and RSEs (developers) are now mainly hired full-time rather than temporarily, while some people are still hired on shorter contracts as an exception. It has been recognized that it's very hard to retain talent if you only offer short-term/temporary contracts.
Comment: really liked the point about having and getting time for a "pet project": not only to have something to show later on a CV, but also to stay current and motivated and to learn "for free". +5
I started one early this year and told my boss about it a few months later. He enjoyed the initiative, then told me a heartwarming story about investing in projects that just might become a big thing one day. Plus, I also think that this sort of initiative can bring in ideas for an RSE to apply for funding, which is always welcome.
yes! a number of established projects started as side projects and these can often open up funding opportunities and cross-discipline collaborations.
There is a good model at my work: 70+20+10. Spend 70% on regular work, 20% learning something new that will make your work better, and 10% learning something new that is good for you but does not have to be directly work/project related. +1
Comment: perhaps another way to make scientific software as "important" as research papers is to always try to publish it in a popular repository (CRAN, PyPI, etc.). Not sure how applicable that would be for software that is not an R or Python package, though.
yes, also connecting to yesterday's talk. this can really help using standard practices. publishing packages always felt/sounded difficult/scary until I tried it.
Comment: Excellent point, Patric, about the standards to which we need to hold research software to be able to keep it at the same level as experimental setups and methodologies +3
May we get more details about such standards so we can register them on this document?
Recognizing methodology development: I'm thinking for example of my colleagues building or working on experimental setups and detectors (e.g. at CERN, GSI and other institutes). They spend the majority of their PhD/postdoc building setups and developing methodology, similar to how others might spend a lot of their time developing and building software. They have several journals where they can publish their work and get recognized for their contribution, which they base their PhD thesis/project upon. There are of course many journals where software and research software can be published, but at least in my experience, I could not build my PhD thesis or project on this work, since there is a complete focus on the research. The attitude is generally that code is just methodology, "it should just work", and it is not important how it works. I think that this viewpoint is wrong. If you make significant contributions, maybe even breakthroughs, this should be recognized and valued.
Testing: Nobody would trust experimental equipment that has not been calibrated or tested. Building an experimental setup, it is therefore assumed that you will have to spend time on calibrating and testing. Sadly, the same cannot in general be said about research codes (again, this is in my experience, but I've heard others commenting the same in this conference). As an example, I took over a large code that was lacking testing and proper documentation. As I started adding unit tests, regression/physics tests, the senior PI was starting to get impatient because they wanted to see research. Even though several important bugs had been discovered that put the validity of results in jeopardy, I was told "enough tests already". An issue is that PIs might be more focused on research and know very little about writing sustainable codes; they just want them to work. I have a hard time believing that the same would happen in the building of an experimental setup (sure, there are horror stories from experiments, but this is far from the norm).
Reproducibility: Reproducibility is another issue. It is often completely acceptable to neglect mentioning implementation/code etc. in methods sections, as long as you describe e.g. the physics theory you are using. Even if the code is mentioned, it is rarely made open access, and when it is made open access, it is often hard to use it to reproduce the results. This goes against the core tenet that research should be reproducible. There needs to be more acceptance for the time it takes to make a code re-usable. Of course, everyone cannot re-build a detector for the Large Hadron Collider at home, but at least a great deal of time and effort is spent in describing how said detector works and can be built. +1
TL;DR: So these are some of the main points (recognition of code/methodology development, testing, reproducibility) when I make the comparison between experiments and "numerical experiments", and when I say that they should be held to the same standards. If we make this analogy, it might be easier to promote RSE-related work as not just important but crucial, and to have a greater acceptance of time spent doing these tasks for e.g. PhDs and postdocs.+1
Excellent point to compare software development with experimental method development. How can something lead to good data if there are bugs in the code and nobody verifies it? Reproducibility is also the key to preventing wrong conclusions. I'd like to mention PyFAI: it is an analysis tool, published on GitHub, developed by a core team, but users and scientists can contribute to its development. It was also published in traditional publications.
Comment: great point about mentoring groups and connecting to others. +1
Python and R are two major programming languages used for research software development and data analysis in bioinformatics. It is not a symbiotic relationship, but a cold war between the fans of both.
Different tools are available to use R in python and vice versa, but they demand learning both languages. This is not easy, and thus rarely adopted.
This talk will pitch the idea of using ASTs (abstract syntax trees) to build a transpiler between the two languages and showcase a simple demo of converting code written in one language directly into the other.
The talk will present:
the R vs python problem and its consequences in bioinformatics,
the idea of a transpiler,
some examples of existing transpilers,
a demo of R <-> python PoC transpiler.
Questions and comments
comment: great point about tools possibly dividing communities
Q: optimal R programming would likely involve making use of its vectorization capabilities, for example by using apply functions instead of loops. Is this something that rtopython-mapping plans to implement?
numpy provides these opportunities, so it is possible, hopefully feasible :-)
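A minimal illustration of the two styles, i.e. the kind of vectorized numpy code a translated `apply`/`sapply` call could ideally target (the example itself is invented):

```python
import numpy as np

values = np.arange(100_000, dtype=float)

# Literal translation of an R for-loop: slow, element by element.
squared = np.empty_like(values)
for i, v in enumerate(values):
    squared[i] = v * v

# Idiomatic vectorized target, roughly what sapply(values, function(v) v^2)
# could be mapped to: a single whole-array operation.
squared_vec = values ** 2

assert np.allclose(squared, squared_vec)
```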
Q: Why can't the comments be translated? Just curious.
this can be done, the PoC uses ASTs (abstract syntax trees) for the conversion, and comments are not a part of AST; but this could have been added on top
Q: How does it work under the hood?
an AST converter from R to Python for the grammar + a huge map implementing R functions as Python AST structures
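The R side needs its own parser, but the Python half of such a mapping can be sketched with the standard `ast` module; the `paste0` rewrite below is a hypothetical map entry, not the talk's actual implementation:

```python
import ast  # Python 3.9+ for ast.unparse

class RewriteCalls(ast.NodeTransformer):
    """Rewrite mapped function calls in a Python AST."""
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "paste0":
            # One map entry: paste0(a, b, ...) -> ''.join((a, b, ...))
            return ast.Call(
                func=ast.Attribute(value=ast.Constant(""), attr="join",
                                   ctx=ast.Load()),
                args=[ast.Tuple(elts=node.args, ctx=ast.Load())],
                keywords=[],
            )
        return node

tree = ast.parse("x = paste0('a', 'b')")
tree = ast.fix_missing_locations(RewriteCalls().visit(tree))
print(ast.unparse(tree))  # x = ''.join(('a', 'b'))
```

Note that, as mentioned above, comments never make it into the AST, which is why they are lost in translation unless handled separately.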
Ericsson has a long history in the telecommunications industry dating back more
than a hundred years, but with traditional network infrastructure becoming
increasingly virtualized and software-defined, and the rise of the cloud, both
the way we work and the skills we require are rapidly changing. This change is
driven by the need to collaboratively develop code that can be meaningfully
shared with stakeholders.
This talk illustrates the inspiration that Ericsson Research Cloud Systems and
Platforms (CSP) is taking from the Research Software Engineering community and
provides an introduction to one concept for a distributed application runtime
that we are working on.
There are already long-standing software stacks for distributed computing. How are you interacting with these existing projects? I do hate to continually pound this drum, but HTCondor, written by the folks in Wisconsin, has been doing this sort of platform-agnostic distributed computing for a long time now.
Any plans for running on HPC and how that would work?
A: Currently looking for stakeholders right now for research projects. Under discussion and debating whether the software should be open sourced.
In the era of GDPR and sensitive data, how do you see the ocean of computing working with the localisation requirements that these regulations impose?
A: Not working with the infrastructure but specifying locations where stuff should run.
Comment: We have quite a curious application at particle accelerators: the people making particle "re-energizing" devices for the particle accelerator started using networks/clouds of smart devices. So instead of well-defined dedicated networks, the control system runs over WiFi.
Where does Ericsson see potential benefits in building links with the RSE community? Would this, for example, ultimately be as a route to build users of specific codes, would it be as a route to get/share technical expertise and input from the RSE community, etc? Are there other drivers?
This has to be a two-way partnership, giving back. Trying to get people to better communicate both internally as well as across stake-holder groups. The Met-Office showed how training is important for getting researchers up to speed on making production code and the unique requirements this entails. An on-boarding program for getting new hires acclimatized to the standards of the projects is very important. All code should be group-code. Beware of personalized ownership.
Code ownership is a particularly tricky point, both in Sweden and elsewhere.
This is a very interesting topic that is getting more attention recently. Who owns the intellectual property for e.g. the code developed to do research at a university? Universities have a policy that can seem quite aggressive in the way they claim ownership. It was discussed that teachers can fall under a special clause, but that researchers and students do not. I'm not sure if the latter is true, students (from bachelor up to and including postdocs) also have a special clause, and the tools they develop to "conduct their studies" (including thesis work and postdoc work) are their own. At least this seems to be the case at my university. For researchers and other staff, the situation looks different. Please comment if you have any experience or encountered a different policy.
Patents & Papers as measures for success, but are these actually good measures for progress and in particular the value of collaborative work?
11:00 (CET)
Discussions based on the talks (participants can move between breakout rooms)
Code quality in academia has a bad reputation. The global measure of the quality of a computationally oriented research group is typically based on the number of published papers, not on stable and well-organized code. The latter is crucial for the further development of the scientific quality of the group. Is it possible to make code quality more important than the number of published papers in academia?
Interesting questions that are closely related to my title:
How do we construct a sustainable workflow for groups where some members have a user perspective and others a developer perspective? Where does the line go between the knowledge expected from users and from maintainers?
Who has the responsibility to teach academic staff about best software practices? Is it the individuals doing computational work themselves?
How can university study programmes keep up with and keep track of "standard" developments in the business? For instance: in 2020 everybody doing some kind of development should be aware of version control and testing.
code quality vs code volume: Is it worth putting effort into going open source? -> ongoing debate in many places
github: a representation of the university towards the world, should be showcasing the good
we may not be experts but we should be using the tools provided (version control, testing)
-> "you wouldn't trust an uncalibrated thermometer"
pressure from funding agencies to produce research results and not code; hard to justify making good code an output
code should be a part of research proposal, otherwise there is 'not enough time'
often the severity of the problem is not seen by professors
people get by producing adequate code and get funding; that does not motivate people to do testing etc.
professors need to know and understand the problem (often they do not do any coding (anymore) and forget)
later additions to 'bad code' lead to problems which get noticed -> explain to the 'higher level' how we could save time with better code from the beginning
Lots of time pressure in research projects where improving and making code reproducible is not focussed on
no courses on how to write research code :( or not many
stuff like software engineering is often one of the first courses to be dropped when money runs out. Often because 'higher levels' do not know about the importance
new field: not much old stuff to build on top of, no real need for sharing; but that is no longer the case
Catching up takes time; no time to reproduce everything
one citable paper for many years of developing the software? -> need to be judged on different scale
Standard research outputs are not the only thing that research is measured by anymore, need to 'jump on the train'
continuous necessity for novelty, all metrics problematic, potential metrics: how many people are using your product? -> if many people use it, it is valuable to maintain and update software, supporting a large community
usage metrics as a way of demonstrating impact -> hard to make funding bodies recognize that
in the UK it is now being pushed to be recognized, slowly building up; it took several years to build an evidence base of usage
importance of being able to read documentation; you need to know where to look and how to do it (not everyone can do that)
what about promoting the importance of releasing often? It is a measure of continuous effort, something that writing one-off papers doesn't do, and even small, bug-fixing patches are important IMO. The flip side is it could encourage busy-work, but I still think it's worth it.
all metrics can break -> wide variety of metrics for value is important
some people work in a field where software is not used much by other people, but may be very useful for your colleagues -> citation supports more novelty than quality
no tasks anymore that can be solved by one person. As a researcher, pair up with an RSE to solve a problem, paper together. win-win. can also help your career.
people are rewarded for bad code by keeping their job through being the only person who can actually read and work with their code, no incentive to make code better -> better long term management needed
today no one is indispensable; don't hire people who think they are
make sure multiple people can 'keep the server running', collaborate
-> culture change needed
need for basic education (version control etc) of students, code review
but people do not like to find out / be pointed out as having written bad code; this creates a high barrier, but one we need to get over
when sharing code, people will get used to it, as it's part of development
CodeRefinery as a good place to send new PhD students to learn version control and how collaborative coding works -> no merge without someone else reading the code first
it's all research; you never know which part will become part of your codebase and turn into something big -> hard to go back later, so it's important to start early with reviewing, version control etc.
So maybe we should have some sense of "continual review" like "continuous integration"
writing test framework takes time but is worth it in the long run
do what you want in your own code, but you will need to 'act like a software engineer' when working with others
compared to how long it takes to make things work, test implementation does not take too much time
courses such as CodeRefinery are not appealing to some people who think they already know git well enough for their current use ('bubble workers'), so they never learn what it could be like with branches, even in their own code
Research Software Hour is an online
stream/show about scientific computing and research software. It is designed to
provide the skills typically picked up via informal networks: each week, we do
some combination of exploring new tools, analyzing and improving someone's
research code, and discussion.
In this show which we will stream during the Nordic RSE get-together, we will
have a look at the Rust programming language.
This can only be joined via Twitch, not breakout rooms. Join at
https://www.twitch.tv/RSHour. Questions can be asked via a HackMD
link at that page, no account necessary.
We will give a presentation of the EESSI (European Environment for Scientific Software Installations) project, including a demo of its current pilot software stack.
In a nutshell, EESSI develops an infrastructure/service which will eventually allow you to use the same scientific software stack on any machine (e.g., Raspberry Pi, laptop, server, cluster, cloud, supercomputer) running various operating systems (Linux, macOS, Windows). The software stack is built from source and can thereby be optimised for the CPU/GPU/interconnect of your machine. Even better, you don't have to install (almost) any software yourself, as the stack is delivered to you via CernVM-FS, a proven solution for distributing software in the WLCG (Worldwide LHC Computing Grid).
The current pilot stack can be easily tested via Singularity, supports ARM, Intel and AMD processors and includes scientific software packages such as GROMACS, OpenFOAM, bioconductor, TensorFlow as well as all their dependencies.
Questions and comments
Question: Is it possible to test the whole stack? Please add links.
Yes, see https://eessi.github.io/docs/pilot/
To get help, join the EESSI Slack, see https://www.eessi-hpc.org/join/
Question: Will you also support AMD Rocm and AMD ecosystem overall?
Yes, eventually. Right now there already are optimized installations for AMD Zen2 (Rome).
OpenMPI is included and is installed on top of UCX & libfabric, so should properly support AMD Rocm interconnect, but this is currently untested.
Comment: I like this idea, as for us it is important that people can use it on their laptops.
Personally I don't lose much time setting up software on my laptop, but I see that for users it is important
to have an option to install/use it in their lab too. They like it more.
Yes, this could allow people to literally write a job script that just works on the HPC cluster. Same modules, same software.
(and no need to build containers, or copy them over, etc.)
Question: This builds on existing projects, so it has some content from the beginning.
Thanks to EasyBuild we can easily provide 1000s of installations.
Right now we limit what we provide, so we can focus on solving the problems we're hitting first.
Question: Why European in the name?
Because it started with European sites.
We're already thinking about changing the first E to "Easy" :)
"EESSI is the Easy Environment..."
Question: What are the possibilities to add an "own dirty module"? Is it the same as with EasyBuild itself?
You can easily install additional software on top, for example in your home directory or in /tmp, just like you can with any other software stack built with EasyBuild.
Question: Sensitivity of central Stratum-0 component, in terms of resilience?
The CernVM-FS design is very robust. If the Stratum-0 dies, the only impact is that you can't add new software to the repositories.
As long as one Stratum-1 server is still alive, the software remains available (all Stratum-1 servers have a full copy of the provided software).
So it comes down to having enough Stratum-1 servers, spread across the world, in different sites and cloud providers.
W.r.t adding software: we plan to fully automate the workflow of adding software to the EESSI repository, such that adding software comes down to opening a pull request on GitHub. When the PR is approved by a reviewer, the software gets built automatically on all supported CPU architectures, and added to Stratum-0, fully automatically. Ideally we also have (small) test cases to verify that the installations are functional before deploying them.
Question: You mentioned that CernVM-FS only relies on HTTP connections. Shouldn't that be HTTPS for security reasons?
No, switching to HTTPS has no added value in terms of security, we've discussed that with the CernVM-FS developers.
CernVM-FS has built in security checks between server and clients, so HTTPS doesn't provide any additional security (I think, should be checked in CernVM-FS documentation).
Question: How would this work for large jobs across multiple nodes? Can a lot of network traffic to pull in the software be avoided?
Yes, you can set up a shared CernVM-FS cache on a shared filesystem.
If there's no internet access on the cluster worker nodes, you can use a squid proxy in the cluster network (on a login node, for example).
This setup has been tested with the EESSI pilot stack at the Jülich Supercomputing Centre, worked really well!
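As a rough illustration of the client side, the snippet below writes a minimal CernVM-FS client configuration pointing at a site-local squid proxy and a shared ("alien") cache. The hostname and paths are invented, and this is a sketch only; consult the CernVM-FS documentation for the actual setup.

```python
# Hypothetical sketch (must be run as root): configure CernVM-FS clients to use
# a site-local squid proxy and a shared cache on a shared filesystem.
# Hostname and paths are made up for illustration.
cvmfs_local = """\
CVMFS_HTTP_PROXY="http://squid.cluster.local:3128"
CVMFS_ALIEN_CACHE=/shared/cvmfs-cache
CVMFS_SHARED_CACHE=no
CVMFS_QUOTA_LIMIT=-1
"""

with open("/etc/cvmfs/default.local", "w") as f:
    f.write(cvmfs_local)
```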
Comment: Detection of the CPU architecture is a very nice feature. This is a big issue with containers, where generic binaries are often used, which can have a big impact on performance.
Yes, indeed! Containers are also very rigid: what if you want to add additional software?
The EESSI environment is way more dynamic, easy to add software on top of it (without paying for it in terms of performance), etc.
Comment: This would also work really well in heterogeneous environments with a mix of old/new CPUs, thanks to the auto-detection mechanism.
Yes, exactly, this is an interesting use case!
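For reference, CPU auto-detection of this kind can be done with the archspec Python library; whether EESSI uses exactly this mechanism is an assumption here, and the snippet just shows the idea.

```python
# Sketch: detect the host CPU microarchitecture, as an EESSI-style init script
# could do to pick the best-matching optimised software directory.
import archspec.cpu  # pip install archspec

host = archspec.cpu.host()
print(host.name)                          # e.g. 'zen2' on an AMD Rome node
print([a.name for a in host.ancestors])   # compatible fallbacks, e.g. 'x86_64'
```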
Procuring an HPC system - a.k.a. a supercomputer - is a complex and
multifaceted task. Before sending out the Request For Proposals the
procurer needs to quantify requirements along several dimensions and
decide on an acceptable level of risk. Should the tendered system
maximize benefit for existing users and use cases, or should possible
future user communities and emerging HPC workloads be factored in? Do
you prioritize throughput capability or minimizing time to solution
for given workloads? How important are acquisition and running costs
compared to other measures? Are you willing to invest in future
technologies which would require significant refactoring of commonly
used HPC simulation software? Which HPC software should be included in
the benchmarking suite, and how should benchmark results be scored?
This session will start with a walk-through of several aspects of an
HPC procurement and will be followed by an open discussion where
participants can share their own experiences. A goal of the discussion
can be to arrive at a set of best practices in HPC procurements.
Are you monitoring your cluster usage? What tools?
Do you regularly run regression tests? Which tools?
How did you select application benchmarks?
How did you design the scoring system?
Was there anything that surprised you during the procurement process?
What was particularly challenging?
Do you think the procurement could have been more successful if you had done anything differently? If so, what?
Should we score the quality of benchmark reports?
Is it worth all the work to use real application benchmarks, or should one use only synthetic or kernel benchmarks?
How should we estimate the "real" power usage of the system?
Do we need to run a job mix to evaluate how different users affect each other?
Are benchmarks a good way to evaluate the "competence" of the vendor?
Q: How did your requirements gathering procedure work, and how did you incorporate risk assessment into the requirements process?
Q: Is the benchmark list exhaustive for the application benchmarks? Can you elaborate on why you chose such a narrow scope for the benchmarks?
Seven benchmarks is probably at the upper end and approaching painful for vendors.
Including more benchmarks leads to overall worse results.
One often ends up benchmarking the vendor's benchmark teams.
Is this a good thing, given that we will need software support later?
The benchmark teams might not be the same group that does support.
How much modification of the benchmark codes should be allowed?
Major refactoring is not representative of typical future use, so one might want to keep it minimal.
Performance for individual proxy benchmarks and application benchmarks can point in different directions, but often becomes robust after averaging (see the toy sketch below).
To eliminate small/less competent vendors, one can require a certain minimum placement on the Top500, or alternatively have requirements on vendor stability or financial aspects.
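To make the averaging point concrete, here is an entirely illustrative toy example, not the scoring scheme of any actual procurement: bids are scored by the geometric mean of per-benchmark speedups against a reference system, which dampens the effect of a single outlier benchmark.

```python
# Toy sketch: score a bid by the geometric mean of per-benchmark speedups
# relative to a reference system. All numbers are invented for illustration.
from math import prod

reference = {"gromacs": 100.0, "openfoam": 250.0, "tensorflow": 80.0}  # runtimes (s)
bid       = {"gromacs":  60.0, "openfoam": 200.0, "tensorflow": 90.0}

speedups = [reference[b] / bid[b] for b in reference]
score = prod(speedups) ** (1 / len(speedups))  # geometric mean of speedups
print(f"score = {score:.2f}")  # > 1 means faster than the reference on average
```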
Q: Did you have a dynamic rebalancing of scores for phase 1 and phase 2 depending on the bids?
We had minimum performance requirements for each phase.
It turned out that it paid off for vendors to focus their efforts on phase 2.
We decided to have phase 2 with accelerators because that's where HPC is heading, even though not all users were in favour.
This was an unconference session, a discussion added to the agenda during the workshop.
Questions and comments
Mention of where containers are used and what the alternative and competing tools are. Conda was mentioned many times.
A story of advertising containers to scientists at an institution, in particular by organising a seminar giving an overview of containers and a brief introduction to them, including documentation on where and how they can be used at the institute. Even such activities have not resulted in serious adoption on the scientists' side. On the other hand, containers found other applications, e.g. where isolation is needed to separate the conservative, stable software controlling scientific instruments from an up-to-date data analysis environment. It was mentioned that in the early days, containers were presented in a way that conveyed only a limited view of their applications.
It was mentioned that applications in instrument control, which require isolation, are interesting.
There are different types of containers (Docker, Singularity, ???) that are appropriate for different situations: service deployment, HPC, isolation, etc. The proper one should be chosen depending on the application case.
There are many possible ways in which a given container technology can be adapted/used; one should think of it as a versatile technology.
Other particular cases where containers were used: a) glibc issues, b) .NET on CentOS.
A thought was expressed that if containers (or other tools) are not widely used/adopted in a given community (scientists), they can die out.
View from bioinformatics: Conda is used a lot (Bioconda); nowadays many compute-intensive pipelines are packed into Singularity containers.
An "unconference" has events scheduled based on interest of
participants, not decided by organizers. We are leaving this time
open for ad-hoc events proposed by participants. If there is not much
interest, we will move the conclusion forward.
Thanks to all contributors, organizers and keynote speakers
Especially Samantha, Richard, Thor, Jeremy, Radovan, and Naoe for the technical setup
Highlights from the notes
Great intro to the international RSE movement and to RSEs on Monday, thanks Alys Brett, Richard and Samantha
Good introductions to groups and projects
several expanding RSE groups
Several technical tools and topics
The further we go, the more we can focus on sharing experiences and tools
Great to see interest in these discussions!
We will try to make all slides/contributions findable and accessible
authors: please send us the DOI or the pdf version
Interesting and inspiring panel session.
What should Nordic RSE do?
Build an identity. Create a network with local hubs.
About local hubs: multiple things need to happen on university level
Build a network, connect RSEs who are currently only connected to researchers in a field
Have a more specific definition of RSE
Job board (separate from CR chat?)
Give feedback to national and Nordic organisations (funders, for example)
Collection of resources on the website (Hands-on Scientific Computing, Citation File Format, …)
It is easy to see the problems. We should implement solutions.
Make good, well designed tools that also professors will use.
Use and advocate for best practices
Ask for feedback
Invite everyone to
Biweekly meetings
Coffee breaks for more freeform chats
15:15 (CET)
Close
Proposal submissions
The abstract submission form will
be kept open until approximately one week before the event but we encourage
you to submit as soon as possible - even if not perfect.
You can also submit an idea for a contribution to our
Proposal incubator
as GitHub issue where we can comment on it and collaboratively develop the
idea. This is also an opportunity to find co-authors.
Examples of types of events that can be proposed:
Lightning talks (2 minutes)
Lightning talks are presentations that are limited to a maximum of 2 minutes and no more than 2 slides including
any title slide. They allow you to introduce your group or give a high-level overview of a project.
Talks (20 minutes)
A regular talk provides the opportunity to go into more detail in presenting your work, a technical idea
or an example of using a software tool or library. There is also the opportunity to get feedback through
questions from the audience. The talk itself should be at most 20 minutes, and 10 minutes will be reserved for
questions and discussion after the talk.
Reprohacks (120 minutes with break)
Propose a research paper and try to reproduce the results. There is no better way to learn what
is needed for reproducible research, and what you might be missing, than to pick a paper and try
to recreate it. At the workshop we will work in small groups on individual publications and see how far we get.
Crash Course (60 minutes)
Run a teaching workshop introducing a useful tool or an interesting theoretical topic. This could
be a combination of demonstrations, short talks, discussion, panel sessions and so on.
Discussions or panels (60 minutes)
Propose a topic of conversation to develop an idea or seek experiences and opinions.
The submitter should chair the conversation to keep it productive.
Collaborative blog post (60 minutes)
A more formal discussion that produces a blog post as an end result.
Topics would be pitched in advance.
Other
Do you have something in mind that does not fit easily into these categories? Suggest any other contributions here.