Prior to PerDis, there will be the Ubi Summer School, hosted by the University of Oulu, and an Arctic Circle tour.
8:00 | Departure from Oulu – snack in the bus
~11:00 | Arrival in Rovaniemi
11:00-13:00 | Visits to science centers: Arktikum Museum and Science Center Pilke
13:00-15:00 | Lunch at Lapland Restaurant Kotahovi & Santa Claus Reindeers
15:00-17:00 | Santa Claus Village at the Arctic Circle
~17:00 | Departure from Rovaniemi – snack in the bus
~20:00 | Return to Oulu
9:00 | Registration opens
Tutorials
9:30-11:00 | EYEWORK: Designing Interactions with Eye Movements – Tutorial 1
11:00-11:30 | Coffee - served in the lobby
11:30-13:00 | Ubicomp in the Wild: Developing and Deploying Pervasive Displays – Tutorial 2
13:00-14:30 | Lunch Break
14:30-16:00 | Collaboration and Personal Devices around Interactive Displays – Tutorial 3
16:00-16:30 | Coffee - served in the lobby
16:30-18:00 | Next Generation Virtual Reality: Perception meets Engineering – Tutorial 4
18:00-21:00 | Welcome Reception
8:30 | Registration opens
9:00 | Opening of the symposium
9:20-10:20 | Giulio Jacucci: Prof. Giulio Jacucci's research interests include ubiquitous interaction design, social computing, persuasive technologies, interactive public displays, user interfaces in information retrieval and exploratory search, and physiological computing. In these areas his notable contributions include interactive scalable screens with novel techniques and user interfaces, including the first urban outdoor multi-touch installation. For more info see https://www.cs.helsinki.fi/en/people/jacucci
10:20-10:40 | Coffee
10:40-12:05 | Tasks and Studies
Don't Disturb Me - Understanding Secondary Tasks on Public Displays – Paper
Supporting Efficient Task Switching in a Work Environment with a Pervasive Display – Paper
Multimodal Interaction in Process Control Rooms: Are We There Yet? – Paper
Intimate Proxemic Zones of Exhibits and their Manipulation using Floor Projection – Paper
VisAge: Augmented Reality for Heritage – Video
12:05-13:20 | Lunch
13:20-14:40 | Interaction Techniques
The Lay of the Land: Techniques for Displaying Discrete and Continuous Content on a Spherical Display – Paper
Investigating Mid-Air Gestures and Handhelds in Motion Tracked Environments – Paper
Design implications for interacting with personalized digital public displays through smartphone augmented reality – Paper
Exploring 3D Manipulation on large Stereoscopic Displays – Paper
14:40-15:00 | Coffee
15:00-16:20 | Technology
Using On-Body Displays for Extending the Output of Wearable Devices – Paper
Automatic Projection Positioning based on Surface Suitability – Paper
Guided Touch Screen - Enhanced Eyes-Free Interaction – Paper
The ASPECTA Toolkit: Affordable Full Coverage Displays – Paper
16:20-16:40 | Coffee
16:40-18:00 | Software Systems
The Massive Mobile Multiuser Framework: Enabling Ad-hoc Realtime Interaction on Public Displays with Mobile Devices – Paper
Your Browser is the Controller - Advanced Web-Based Smartphone Remote Controls for Public Screens – Paper
Synchronized Signage on Multiple Consecutively Situated Public Displays – Paper
"A Good Balance of Costs and Benefits" - Convincing a University Administration to Support the Installation of an Interactive Multi-Application Display System on Campus – Paper
18:00-21:00 | Poster and Demo Reception
9:00-10:20 | Media
Citizens Breaking out of Filter Bubbles: Urban Screens as Civic Media – Paper
The impact of rhetorical devices in text on public displays – Paper
Understanding media situatedness and publication practices in place-based digital displays – Paper
In the Candle Light - Pervasive Display Concept for Emotional Communication – Paper
10:20-10:40 | Coffee
10:40-12:05 | In the Wild
Opportunistic Deployments: Challenges and Opportunities of Conducting Public Display Research at an Airport – Paper
Memory Displays - Investigating the Effects of Learning in the Periphery – Paper
Emergent Practice as a Methodological Lens for Public Displays In-The-Wild – Paper
Campus Knights: Situated Pervasive Display as a Window into Pseudo-Immersive Game World – Paper
DroneLandArt: Landscape as Organic Pervasive Display – Video
12:05-13:20 | Lunch
13:20-14:40 | Device Ecosystems
Replication of Web-based Pervasive Display Applications – Paper
Usage Analysis of Cross-Device Web Applications – Paper
Surveying Personal Device Ecosystems with Cross-Device Applications in Mind – Paper
Screen Arrangements and Interaction Areas for Large Display Work Places – Paper
14:40-15:00 | Coffee
15:00-16:00 | Prof. Steve LaValle: Steve LaValle started working with Oculus VR in September 2012, a few days after their successful Kickstarter campaign, and was the head scientist up until the Facebook acquisition in March 2014. He developed patented, perceptually tuned head tracking methods based on IMUs and computer vision. He also led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration and the design of comfortable user experiences. In addition to his work at Oculus, he is also Professor of Computer Science at the University of Illinois, where he joined in 2001. He has worked in robotics for over 20 years and is known for his introduction of the Rapidly exploring Random Tree (RRT) algorithm of motion planning and his 2006 book, Planning Algorithms. For more info see http://msl.cs.uiuc.edu/~lavalle/
16:00-16:30 | Town hall meeting/closing remarks
Weather Traveler - Art Installation
Charléne Airaud, Pauliina Heiskanen, Ville-Valtteri Kivilompolo, Joona Laitinen, Reetta Nissinen, Juulia Ruhala, Janine Vohwinkel, Sanni Wallgren, Juho Rantakari, Ashley Colley and Jonna Häkkilä.
In-Situ-DisplayDrone: Facilitating Co-located Interactive Experiences via A Flying Screen
Jürgen Scheible and Markus Funk.
The Massive Mobile Multiuser Framework: Enabling Ad-hoc Realtime Interaction on Public Displays with Mobile Devices
Tim Weißker, Andreas Berst, Johannes Hartmann, and Florian Echtler.
The Audience in the Role of the Conductor: An Interactive Concert Experience
Marco Speicher, Lea Gröber, Julian Haluska, Lena Hegemann, Isabelle Hoffmann, Sven Gehring and Antonio Krüger.
Just One More Thing! Investigating Mobile Follow-up Questions for Opinion Polls on Public Displays
Matthias Baldauf, Wolfgang Reitberger, Florian Güldenpfennig and Callum Parker.
Understanding Movement Variability of Simplistic Gestures Using an Inertial Sensor
Miguel Xochicale, Chris Baber and Mourad Oussalah.
Transitioning from a Research Deployment to a Service
Sarah Clinch, Mateusz Mikusz and Adrian Friday.
Comparing Two Methods to Overcome Interaction Blindness on Public Displays
Guiying Du, Lukas Lohoff, Jakub Krukar and Sergey Mukhametov.
Assessment of an Unobtrusive Persuasive System for Behavior Change in Home Environments
Dominik Weber, Alexandra Voit, Tilman Dingler, Manuela Kallert and Niels Henze.
There is more to come: Anticipating content on interactive public displays through timer animations
Maximilian Müller, Aris Alissandrakis and Nuno Otero.
A human-driven and evanescent screen for personal information presentation
Ismo Alakärppä and Elisa Jaakkola.
DroneLandArt: Landscape as Organic Pervasive Display
Jürgen Scheible and Markus Funk.
VisAge: Augmented Reality for Heritage
S. J. Julier, A. Fatah gen Schieck, P. Blume, A. Moutinho, P. Koutsolampros, A. Javornik, A. Rovira, E. Kostopoulou
This year the Symposium on Pervasive Displays will offer four tutorials. The tutorials will take place on June 20 at the Oulu School of Architecture. Attendance is free of charge.
Instructors: Prof. Hans Gellersen, Lancaster University, UK & Dr. Eduardo Velloso, University of Melbourne, Australia
In recent years, we have witnessed a revolution in eye tracking technologies. Eye trackers that used to cost tens of thousands of dollars, requiring awkward head-mounts and convoluted calibration procedures now cost less than a hundred dollars and are simple to set up and easy to use. As technology decreases in size and cost, we envision a world in which eye trackers will ship by default with interactive appliances, similarly to how any phone or laptop comes with an integrated webcam nowadays. This tutorial provides a crash course on how eye tracking works and introduces a wide range of interaction techniques that use the eyes only and that combine the eyes with novel input modalities, such as gestures, touch, game controllers, etc.
Hans Gellersen is Professor of Interactive Systems at Lancaster University. Hans' research interest is in sensors and devices for ubiquitous computing and human-computer interaction. He has worked on systems that blend physical and digital interaction, methods that infer context and human activity, and techniques for spontaneous interaction across devices. His recent work focuses on eye movement, leading research that breaks new ground in how we can use our eyes for interaction pervasively. Hans' work is published in over 200 articles and has been recognised with Best Paper Awards at CHI, Pervasive, and TEI, amongst others. He is one of the founders of the UbiComp conference series and an Associate Editor of ACM Transactions on Computer-Human Interaction (TOCHI) and the Journal on Personal and Ubiquitous Computing (PUC). He holds a PhD in Computer Science from the University of Karlsruhe, Germany.
Eduardo Velloso is a Research Fellow at the Microsoft Research Centre for Social Natural User Interfaces at the University of Melbourne in Australia. Eduardo holds a PhD in Computer Science from Lancaster University and a BSc in Computer Engineering from the Pontifical Catholic University of Rio de Janeiro. His research aims at creating future social user experiences by combining novel input modalities such as gaze, body movement, and touch gestures. His latest work has investigated eye-based interaction with smart watches, multimodal combinations of gaze, and eye control of video games. He has designed and conducted multiple courses and workshops, including the EyePlay workshop at CHI Play 2014 and the .NET Gadgeteer Workshop at the iCareNet Summer School 2012, at PUC-Rio, and at the Rio de Janeiro State University.
Instructors: Prof. Nigel Davies & Dr. Sarah Clinch, Lancaster University, UK
Fueled by falling display hardware costs and rising demand, digital signage and pervasive displays are becoming ever more ubiquitous. Such displays are now a common feature of many public spaces and serve a range of purposes including signage, entertainment, advertising and information provision. Beyond traditional broadcast media, recent developments in sensing and interaction technologies are enabling entirely new classes of display applications that tailor content to the situation and audience of the display. The time is right for researchers to consider how to create the world’s future pervasive display networks.
This tutorial explores the challenges of designing, developing and deploying pervasive display systems in the wild, introducing both technical issues (systems software, scheduling behaviours, evaluation techniques) and the human/social/ethical issues that arise from the embedding of pervasive displays in real world environments (audience behaviours, stakeholder concerns).
Nigel Davies is a Professor in the School of Computing and Communications at Lancaster University and co-director of Lancaster’s new multidisciplinary Data Science Institute. His research focuses on experimental mobile and ubiquitous systems and his projects include the MOST, GUIDE, e-Campus and PD-NET projects that have been widely reported on in the academic literature and the popular press. Professor Davies has held visiting positions at SICS, Sony's Distributed Systems Lab in San Jose, the Bonn Institute of Technology, ETH Zurich, CMU and most recently Google Research in Mountain View, CA. Nigel is active in the research community and has co-chaired both Ubicomp and MobiSys conferences. He is a former editor-in-chief of IEEE Pervasive Magazine, chair of the steering committee for HotMobile and one of the founders of the ACM PerDis Symposium on Pervasive Displays.
Dr. Sarah Clinch is a post-doctoral researcher at Lancaster University, UK. She completed her PhD (Lancaster) on the appropriation of public displays and has published extensively on the topic of next generation pervasive display networks. She has been a visiting researcher at Carnegie Mellon University working on novel cloudlet systems. Sarah’s research focuses on the development of architectures for pervasive computing and personalisation in ubiquitous computing systems. She currently works on the European FET-Open RECALL project that aims to re-think and re-define the notion of memory augmentation to develop new paradigms for memory augmentation technologies that are technically feasible, desired by users, and beneficial to society. Sarah is an active member of the research community and is currently serving as publicity co-chair for both IEEE Percom and ACM HotMobile.
Instructors: Prof. Giulio Jacucci, University of Helsinki, Finland & Petri Savolainen, HIIT, Finland
Large displays in combination with personal devices offer a variety of opportunities for collaboration, for example in stand presentations at exhibitions, as public game platforms, in meetings, and in collective exploration of information. Designing applications for such situations requires considering collaboration practices and desired outcomes, walk-up-and-use readiness, and interaction design informed by the available interaction techniques. The topics covered in this tutorial include collaborative activities and opportunities around large displays, walk-up-and-use connection of multiple devices to the web, and interaction techniques across screens and devices.
Giulio Jacucci is Professor of Computer Science at the University of Helsinki and director of the Network Society Programme at the Helsinki Institute for Information Technology (HIIT). He was Professor at the Aalto University Department of Design in 2009-2010 and is co-author of "Design Things" (MIT Press). His research field and competencies are in human-computer interaction, including mobile social computing, multimodal and implicit interaction, haptics and tangible computing, mixed reality, and persuasive technologies. He chaired ACM ITS in 2013 and has served as program chair for NordiCHI, full-papers chair for AVI, and on the CHI Design subcommittee. Prof. Jacucci coordinated the European FP7 ICT project BeAware, which created the award-winning EnergyLife, featured on Euronews, a playful and pervasive application to empower families in saving energy. He currently coordinates MindSee on "Symbiotic Mind Computer Interaction for Information Seeking". He founded an international workshop series on Symbiotic Interaction, which he chaired in Helsinki in 2014. Recently he contributed to the invention of Interactive Intent Modelling, a new interaction paradigm for information discovery published in Communications of the ACM, ACM CIKM, and other venues, and commercialised in the start-up etsimo.com, where he serves as chairman of the board. He is also co-founder and member of the board of directors of MultiTouch Ltd. (MultiTaction.com), the leading developer of interactive display systems based on proprietary software and hardware designs.
Petri Savolainen is a researcher at Helsinki Institute for Information Technology (HIIT). He is one of the inventors and lead developers of Spaceify, an edge computing ecosystem for smart spaces that fuses smart spaces together with the Web. He is also a co-founder, and CEO of Spaceify Oy, a newly-founded startup company that aims at commercializing the Spaceify ecosystem. He is currently working in the Street Smart Retail high impact initiative project of EIT Digital, developing Spaceify Games, a zero-configuration big-screen gaming platform, where the mobile web browser acts as the game controller.
Instructors: Prof. Steve LaValle & Dr. Anna Yershova, UIUC, USA
Virtual reality (VR) is a powerful technology that promises to change our lives unlike any other. By artificially stimulating our senses, our bodies become tricked into accepting another version of reality. VR is like a waking dream that could take place in a magical cartoon-like world, or could transport us to another part of the Earth or universe. It is the next step along a path that includes many familiar media, from paintings to movies to video games. We can even socialize with people inside of new worlds, and both the people and the worlds could be real or artificial. One of the greatest challenges is that we as developers become part of the system we are developing, making objective evaluation of VR systems extremely difficult. Human perception and engineering become intertwined in a complicated and fascinating way. This tutorial provides an overview of the fundamentals of virtual reality systems and discusses developer recommendations and technological issues.
Steve LaValle started working with Oculus VR in September 2012, a few days after their successful Kickstarter campaign, and was the head scientist up until the Facebook acquisition in March 2014. He developed patented, perceptually tuned head tracking methods based on IMUs and computer vision. He also led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration and the design of comfortable user experiences. In addition to his work at Oculus, he is also Professor of Computer Science at the University of Illinois, where he joined in 2001. He has worked in robotics for over 20 years and is known for his introduction of the Rapidly exploring Random Tree (RRT) algorithm of motion planning and his 2006 book, Planning Algorithms.
Anna Yershova also started at Oculus in September 2012 and became a Research Scientist there until 2014. She made fundamental contributions to the head tracking methods and core mathematical software used in the Oculus Rift and Samsung Gear VR. Since 2011, she has been a Lecturer in the Department of Computer Science at the University of Illinois, where she teaches virtual reality, C++, and data structures. From 2009 to 2011, she was a post-doctoral researcher at Duke University, working on computational geometry. In 2009, she completed a PhD in Computer Science from the University of Illinois. She has published over 20 research articles in the areas of robotics, applied mathematics, computational biology, and virtual reality. She has also co-authored math textbooks that have sold millions of copies and are used in schools throughout Russia and Ukraine.