I had written down my ideas for PG05 before my idea completely changed. I thought it would be useful to share them here, as I plan to develop these concepts further in the future.
I visited the British Library to look at books relevant to multitasking. I found a book called Time Use by William Michelson that examines how we approach time through people's use of it.
This got me thinking about the time element of multitasking: the ability to complete multiple projects at once, or something along those lines.
I have come up with two ideas that I want to develop, or at least illustrate through conceptual videos or interactions. My research led me to the first idea:
1) A time management program that lists the tasks a user wants to accomplish. It would create a hierarchy of tasks, whether work or play; the program would be designed to break down multitasking and record how much time the user spends on each task. It would feature an alert tool designed to flag procrastination or a lack of attention to the highest-priority task. This ties in with research suggesting that multitasking is bad practice, and that those who multitask instead of doing one job at a time perform more poorly than those who don't. The participant/user would then have the freedom to accomplish tasks while keeping track of their time. The idea is that this piece of software would be connected to every program and document that is labelled or running.
The prioritiser is designed to enable a higher level of achievement, and to act as an overseer and indicator of the time aspect of our day-to-day lives. The unique part of the software would be its fluidity within our digital lives; it should not be too bulky or crude on a user's device.
The downside may be the user's motivation: if they go against the program from the start, the whole piece of technology becomes redundant.
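The core of the idea above could be sketched in a few lines of code. The following is a minimal, hypothetical sketch, not an existing program: all names (`Task`, `Prioritiser`, `switch_to`) are illustrative, and the "alert tool" is reduced to a printed warning when the user switches to anything other than the highest-priority task.

```python
import time
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    priority: int          # lower number = higher priority
    seconds_spent: float = 0.0


class Prioritiser:
    """Sketch of the concept: track time per task and flag
    attention drifting away from the highest-priority task."""

    def __init__(self):
        self.tasks = []
        self.active = None
        self._started = None

    def add(self, task):
        # Maintain the hierarchy of tasks, work or play alike.
        self.tasks.append(task)
        self.tasks.sort(key=lambda t: t.priority)

    def top(self):
        return self.tasks[0] if self.tasks else None

    def switch_to(self, task):
        now = time.monotonic()
        # Record how long the previous task was worked on.
        if self.active is not None:
            self.active.seconds_spent += now - self._started
        self.active, self._started = task, now
        # The "alert tool": indicate lack of attention to the top task.
        if task is not self.top():
            print(f"Alert: '{task.name}' is not your highest-priority "
                  f"task ('{self.top().name}' is).")
```

In a real version the `switch_to` events would come from the operating system (which window or document is in focus) rather than manual calls, which is where the "connected to every running program" part of the idea would live.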
2) The second idea is a dynamically interactive interface driven by gestures. The idea would be to develop different gestures that a user performs in order to trigger particular programs or working environments. Some primary research I carried out involved monitoring an individual using digital devices for multiple tasks. The three activities I broke down were 1) working, 2) gaming and 3) leisure. What I noticed was that, after a while, I could identify what the subject was doing without looking at the content on screen. His body language and eye contact showed which task he was working on, and whether or not his hands were fixed to a device gave me further insight into what he was doing. My idea is that these positions could be detected through object recognition with a 3D camera (Kinect) and then trigger particular digital applications. For example, one gesture could be for work: this could turn music down or switch it off, open word processing software, and open relevant research or working websites that the user had set up. This would ultimately save time and provide suitable working areas on a computer. In some ways it would be like the Spaces tool on Apple computers, where certain applications can be open in different work areas; the difference here would be the categorising manner of the software.
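The categorising logic of this second idea can be sketched as a simple mapping. This is purely illustrative: the posture labels stand in for whatever a Kinect-style recogniser might emit, and the action names (`mute_music`, `open_word_processor`, and so on) are invented placeholders, not real system commands.

```python
# Hypothetical posture labels, as a 3D-camera classifier might emit them,
# mapped to the three working environments observed in the research.
WORKSPACES = {
    "hands_on_keyboard": "working",
    "hands_on_gamepad": "gaming",
    "leaning_back": "leisure",
}

# Placeholder actions to run when a workspace is entered.
MODE_ACTIONS = {
    "working": ["mute_music", "open_word_processor", "open_research_sites"],
    "gaming": ["launch_game_library", "enable_do_not_disturb"],
    "leisure": ["open_media_player"],
}


def actions_for(posture: str) -> list[str]:
    """Resolve a detected posture to the actions for its workspace.

    Unknown postures trigger nothing, so the interface stays passive
    until a recognised position is held.
    """
    mode = WORKSPACES.get(posture)
    return MODE_ACTIONS.get(mode, [])
```

The design choice here mirrors the Spaces comparison in the text: rather than the user manually assigning applications to work areas, the mapping is driven by the recognised posture, which is what makes the software "categorising" rather than merely organising.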