Alexander Refsum Jensenius

Professor of music technology, Director of the fourMs Lab, and Deputy Director of RITMO, University of Oslo

Location: University of Oslo, Norway

Activity

  • Yes, please send us feedback on what worked (or didn't work) for you. We love to get feedback to improve future runs of the course.

  • We typically use standstill as a baseline. It is possible to consider other baselines, e.g. walking or some other repetitive activity. In our studies, we also screen for medical conditions, etc. In our lab, we mainly work with healthy adults. We are currently working on various sonification approaches to help people listen to their own body motion.

  • Yes, I guess it can, although you would need to know what you are looking for. So the visualizations are only as powerful as the eye using them. :)

  • I don't know of people using it on historical material, but there is nothing that prevents it. I would be curious to see the results!

  • As long as you have a relatively high sampling rate (100 Hz or more), there shouldn't be any problems with the calibration. We typically suggest using the same sampling rate for calibration as for the capture.

  • Sorry for the late reply, but we don't moderate the discussion after the initial course run is done. To answer your questions: we don't offer any automatic ways of overlaying the different outputs. But they are standard images/videos/data files that you can use in different ways with your other software.

  • Great question! I have only seen some news stories about ABBA's performances. Has anyone seen them "live"?

  • I guess the question here is how you define a musical instrument! My (long) answer can be found in my upcoming book: https://mitpress.mit.edu/books/sound-actions

  • Yes, the Reactable is great. I didn't know That One Guy, thanks for sharing!

  • Ah, yes, these are good questions! In fact, so good that I have written a book about them (and others). It will be out soon: https://mitpress.mit.edu/books/sound-actions

  • Yes, indeed. You can check out some of our latest development when it comes to "air guitar" performance here: https://www.youtube.com/watch?v=-_wgBZY2iF8

  • Ah, yes, I think there are many different types of metaphors that can be used. You may find it interesting to have a look at the introduction of Tejaswinee Kelkar's dissertation: https://www.duo.uio.no/handle/10852/71043 in which she writes about different approaches to our understanding of pitch and melody.

  • Great to have you with us!

  • Yes, animal sound (and music) is an interesting topic. For some more on this topic, check out this book by Henkjan Honing: https://mitpress.mit.edu/books/evolving-animal-orchestra

  • I know of several studies of various types of musical features in Indian classical music, but none focused on groove. Would be interesting to hear if someone has any examples!

  • Nice example, thanks for sharing.

  • Yes, those terms are somewhat tricky to differentiate. Try re-reading the relevant sections! :)

  • Yes, I agree!

  • Great that you enjoyed the course. Good luck with your data collection!

  • Yes, good point. For videos, we have often generated motion videos, such as this one: https://www.youtube.com/watch?v=aQ7tjztMI2M. This makes it possible to get a sense of the motion even though the dancer's identity is covered. New tools such as deface (https://github.com/ORB-HD/deface) allow for blurring faces automagically.

  • There are certainly many copyright issues to deal with when working with musical material: composers, lyricists, performers, producers, etc. This makes it very difficult to meet open research requirements. I think it will be important to make clearer license models (also beyond creative commons licenses) and systems for handling copyright issues for researchers.

  • Yes, I agree. We often think that musicians have particular *sonic* styles, but these are also linked to *movement* styles. Would be interesting to do some more systematic analysis of this!

  • Yes, very good point. It is difficult to control for this and find reliable ways of reporting/measuring the use of alcohol/drugs. I am not aware of any mocap studies where this has been controlled for systematically, but perhaps someone else has any pointers?

  • Cool study!

  • Welcome! Great question about what influences our actions, but not easy to answer.

  • Ah, I had only fixed the link in the text. I have now fixed the other link too. Thanks for letting us know.

  • Thanks, have fixed the URL now.

  • The MYO is/was a brand name, but obviously inspired by electromyography. And, yes, muscle sensing is used in many different domains these days!

  • That is a good point. We are doing so many different types of studies, so it is difficult to generalize here. But for hypothesis-driven studies, we have started doing pre-registrations: https://osf.io/prereg/

  • Yes, I think there is a lot of overlap. All the new DMP templates differ slightly, but they all try to formalize more or less the same legal requirements.

  • Have fixed the link now, thanks for notifying us!

  • Yes. When doing the post-processing, it often helps to watch the trajectories as they unfold on screen and use this to identify the ones to connect. But sometimes it is very hard! Then it may help to look at asymmetric markers.

  • Good question! I cannot think of any mocap studies of polyrhythms or multiple rhythms right away, but perhaps someone else knows any?

  • Human micromotion when standing still is around 5-10 mm/s, while the mocap systems we work with can reliably capture around 1 mm/s (dependent on setup and calibration). We have done some measurements of different systems here: http://urn.nb.no/URN:NBN:no-31295
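    To illustrate how such speeds can be derived from sampled position data, here is a minimal sketch using a synthetic, hypothetical marker trajectory (not data from our studies): speed is estimated as the magnitude of the frame-to-frame position difference multiplied by the sampling rate.

    ```python
    import numpy as np

    fs = 100  # sampling rate in Hz, as suggested above

    # Hypothetical 1D marker trajectory: a slow 0.5 Hz postural sway with
    # 2 mm amplitude, roughly the magnitude of standstill micromotion.
    t = np.arange(0, 10, 1 / fs)
    pos = 2.0 * np.sin(2 * np.pi * 0.5 * t)  # position in mm

    # Speed is the magnitude of the first derivative of position.
    speed = np.abs(np.diff(pos)) * fs  # mm/s

    print(f"mean speed: {speed.mean():.1f} mm/s")
    ```

    For this synthetic sway, the mean speed lands at a few mm/s, the same order of magnitude as the micromotion figures mentioned above.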

  • We have tried with several different types of music over the years, but have tried to avoid singing. The main reason for that is to focus on musical features and not get into lyrics (which is super-interesting but also much more complicated).

  • You mean for capturing skeleton-like images? If you can program, something like OpenPose may be a starting point: https://github.com/CMU-Perceptual-Computing-Lab/openpose

    If you are more interested in general motion visualization, take a look at some of our software: https://www.uio.no/ritmo/english/research/labs/fourms/software/musicalgesturestoolbox/

  • From our perspective of "embodied musicology", bodily and pleasurable experiences are at the core of musical activities. Not all scholars agree that this is important, though...

  • Good question! Some musicians have a very clear understanding of the music theory behind what they are playing. Others don't but they can still play it.

  • Yes, it is tricky at first!

  • Thanks for notifying about the broken link. Have updated the Franz Ferdinand video now.

  • Ah, very nice comparison!

  • Yes, the concept of affordance is tricky, particularly because it has been used in many different fields in recent years.

  • Have you tried dancing alone at home? Perhaps barefoot and with the lights off? Does it feel different?

  • Good question. The Xsens suit is a motion capture device. It works by fusing data from the built-in accelerometer, gyroscope, and magnetometer in each sensor unit to track the whole body's motion. This information can be recorded and stored. But since the data is sent in real time, it can also be used for interactive applications. So...
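    The sensor-fusion idea can be sketched with a minimal complementary filter. This is not Xsens's actual (proprietary) algorithm, just a toy illustration of the principle: the gyroscope gives smooth short-term orientation changes but drifts, while the accelerometer gives a drift-free but noisy gravity reference.

    ```python
    import numpy as np

    def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
        """Fuse a gyroscope rate signal (rad/s) with an accelerometer-derived
        tilt signal (rad) into one tilt estimate. alpha weights the gyro."""
        angle = accel_angle[0]
        out = []
        for w, a in zip(gyro_rate, accel_angle):
            # Integrate the gyro for short-term accuracy, and pull the result
            # towards the accelerometer reference to cancel long-term drift.
            angle = alpha * (angle + w * dt) + (1 - alpha) * a
            out.append(angle)
        return np.array(out)

    # Synthetic scenario: the sensor is held still at a 0.3 rad tilt.
    dt = 0.01  # 100 Hz
    n = 2000
    rng = np.random.default_rng(0)
    gyro = 0.02 + 0.01 * rng.standard_normal(n)   # biased, drifting gyro
    accel = 0.3 + 0.05 * rng.standard_normal(n)   # noisy but unbiased tilt

    est = complementary_filter(gyro, accel, dt)
    print(f"final estimate: {est[-1]:.2f} rad")
    ```

    Integrating the biased gyro alone would drift by about 0.4 rad over these 20 seconds; the fused estimate stays near the true 0.3 rad tilt.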

  • In addition to several of Laura's papers, there is also an interesting one by Birgitta Burger on synchronization of eye tracking and motion capture: https://bop.unibe.ch/JEMR/article/view/3983

    We are currently working on another online course called Pupillometry which will introduce the methodology in more detail. It will be produced this spring and...

  • Many good questions. Trying to answer one by one:

    - In the lab we usually work on-site in optimising the setup. But when planning something elsewhere, like MusicLab Copenhagen (https://www.uio.no/ritmo/english/news-and-events/events/musiclab/2021/dsq/index.html), Kayla made a detailed sketch based on drawings of the space.

    - We have a base setup that...

  • Yes, these things happen. We have tried different things over the years, cameras on the walls or in the ceiling, on tripods or on a truss rig. They all have their pros and cons. We typically (re)calibrate quite often to be sure that various types of disturbances don't ruin the recordings.

  • Yes, video analysis may be less relevant in walking, but it can be used! Some years ago, I strapped an action camera on my chest and went for a walk. Running the video through the VideoAnalysis software made it possible to see the walking pattern even though the scenery changed!

  • Yes, stereo-video and depth-video are interesting methods that help add that third dimension!

  • Yes, I guess it at least has an accelerometer. Probably similar to this bike "helmet" that is becoming popular in Scandinavia: https://hovding.com/

  • Yes, the motion energy analysis looks quite similar to what we call a "motion image". It is a standard technique with many names...
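    The basic technique behind a motion image is frame differencing: subtract consecutive greyscale video frames and keep only the pixels that changed. A minimal sketch with synthetic frames (no real video involved):

    ```python
    import numpy as np

    def motion_image(prev_frame, frame, threshold=10):
        """Compute a 'motion image': the thresholded absolute difference
        between two consecutive greyscale frames. Pixels that changed more
        than `threshold` are kept; static background becomes zero."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return np.where(diff > threshold, diff, 0).astype(np.uint8)

    # Synthetic frames: a bright 3x3 square moves one pixel to the right.
    f0 = np.zeros((8, 8), dtype=np.uint8)
    f1 = np.zeros((8, 8), dtype=np.uint8)
    f0[2:5, 2:5] = 200
    f1[2:5, 3:6] = 200

    mi = motion_image(f0, f1)
    print(int(np.count_nonzero(mi)))  # only the leading and trailing edges remain
    ```

    Summing such motion images over time gives a motion-energy-like measure; averaging them gives a motion average image, which is one reason the technique goes by so many names.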

  • Thanks for sharing. We often like to code our own processing pipelines to know what is going on. There are many tools available these days for modifying video files through coding, from more general (Matlab, Python with toolboxes) to more specific (Processing, Jitter, etc.).

  • I am not sure I understand the question, but let me try to clarify a few things.

    - "Plug-in-gait" is a model introduced by Vicon and is part of a package they have for labelling. At RITMO, we often use the label names from the "Plug-in-gait" model also when we work with non-Vicon systems.

    - Gap-filling is usually done automatically by the software and...

  • Yes, there are many exciting possibilities. We have, for example, implemented OpenPose in the latest version of the Musical Gestures Toolbox for Python: https://github.com/fourMs/MGT-python.

  • Ah, interesting example. Then it is also a question about who picked these examples: whose stereotypes are we listening to, and how do the examples fit with people from within the cultures?

  • Great example!

  • Yes, indeed, @CristianeSOUZA. There are many interesting combinations here!

  • When recording multiple people, a useful trick is to use a different marker constellation on each of them. This makes it easier to identify who is who afterwards.

  • In fact, affordance is used in a lot of different contexts these days, from theoretical research to practical design.

  • Ah, a wonderful example of what I would call "sound actions" these days. Thanks for sharing!

  • Well, the other way around. You can think of *quantitative* as dealing with numbers, and *qualitative* as based on text.

  • Yes, indeed! We'll get to that later in the course.

  • Yes, exactly, the double meaning here is interesting. The question is also whether these two interpretations are, in fact, connected. We'll get to that later in the course!