Monday, 26 April 2010

IDAT210 - Project Evaluation

My project, the ‘Sound of Social Networking’, has been an interesting exploration into generative sound and communication. The project was originally aimed at meeting five keywords:


• Generative Art

• Digital Performance

• Dadaist poetry

• Social Networking

• Sound/Music Production



In terms of generative art I feel there is a close link, as the audio and visuals are generated from the text provided by Facebook users, which also makes the process part of my project quite strong. My project can also be considered a digital performance: it updates live, and so is a performance, and the means by which it is created is entirely digital, particularly the generative element. I have also met the social networking and sound production keywords, as these are at the centre of how my project works. The link with Dadaist poetry, however, is not as strong. The process within my project could be considered part of this, but there are not many instructions or randomised elements to the project. The final outcome, the audio in particular, has a Dadaist quality due to its rather abstract nature, but this does not necessarily make it Dadaist.

There are a number of elements of my project which could be improved in the future. Firstly, the audio produced is very monotonic, with every note the same length and volume. However, other information can be gathered from the RSS feed, and this could be used to improve the musical nature of the project. There is also the question of access, both to data and to the project application itself. The application should be able to read data regardless of the RSS format, so that it can be used for multiple feeds rather than just the one I developed; this would make it more functional and open to interpretation than it is at the moment. It would also be good if the application were joined to its data source online, so that contributors can see how what they write affects the audio produced.

In terms of meeting the process of communication that I originally planned, I feel this project fell a little short. Conversation was not developed or emulated that well in my project; instead, other developments were discovered. My project can represent, instantly, the most used length of word at any given time, and it displays communication musically, in the form of a type of written score. It also demonstrated an interesting openness of thought and how this can lead to themes, such as emotion, in the textual content gathered.

Overall I am happy with the project I produced. Through development I discovered a number of interesting features and possibilities that I did not originally plan for, but that in my opinion make the project a success.

IDAT210 - Development Possibility: Webpage Music

After creating my finished piece it occurred to me that there is another step the project could take, one which would re-align its general concept but would also be a good and interesting development. This idea is centred around ‘webpage music’. Because of the way my project works, it is easy to use the same system for any RSS feed URL, so it would be simple enough to make the project dynamic, letting users provide a URL which is then turned into audio. In this way, music could be created for any RSS feed the user provides. This generalisation means the project would be focused on the concept of any web feed being turned into sound, and so it could be classed as a webpage music generator, developing the sound of the internet, rather than simply of a social networking page.
Since this development should not be too difficult, I decided to try creating this version of my project, as shown below:
However, upon creating this I came across a problem: the system I was using was looking for the titles of posts rather than the content. This is fine for the Facebook posts, where the title and content of a post are the same, but in most RSS/ATOM feeds they are different, as with these blog posts. Used this way, the system would only give users a musical representation of the titles of elements, not the content. I also tried the system with a standard RSS feed from Microsoft, which did not work, as the application is based on the ATOM data format. For this version of my project to work, I would therefore have to incorporate elements that took both formats into account and that could access the main content without any extraneous HTML data. I decided that, although this is a good idea for a future development, it would be best to keep my application in its current ‘sound of social networking’ state.
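To make the problem concrete, here is a minimal sketch of the fallback logic a ‘webpage music’ version would need: prefer the full content of a post, but fall back to the title when content is missing, for both ATOM and RSS 2.0 feeds. The original application was Flash/AS3; this is a Python illustration with an assumed function name, not the project's actual code.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_texts(xml_string):
    """Return the text of each post, preferring full content over the
    title, handling both ATOM and RSS 2.0 feed layouts."""
    root = ET.fromstring(xml_string)
    texts = []
    if root.tag == ATOM_NS + "feed":          # ATOM format
        for entry in root.iter(ATOM_NS + "entry"):
            content = entry.find(ATOM_NS + "content")
            title = entry.find(ATOM_NS + "title")
            node = content if content is not None and content.text else title
            texts.append((node.text or "").strip())
    elif root.tag == "rss":                   # RSS 2.0 format
        for item in root.iter("item"):
            desc = item.find("description")
            title = item.find("title")
            node = desc if desc is not None and desc.text else title
            texts.append((node.text or "").strip())
    return texts
```

For a Facebook Page feed, title and content are identical, so either branch gives the same result; for a blog feed, the content branch avoids the titles-only audio described above.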

IDAT210 - The Finished Project

Once I had formatted and completed the production of my project application, all that was left was to launch the Facebook page to the public and get them to carry out the text-creation process from which my application runs. To do this I added a post to the page explaining the limitations of how the page could be used, as mentioned in the Posts and Facebook RSS post. I then invited a number of people to the page and added them as admins whenever they joined. To ensure there were no mistakes in the posts made, I locked the wall off from open access so that only those I had added as admins could write on it. This ensured there are no extraneous posts that won't be taken into account.

By getting the public involved in the page, my project application is now self-perpetuating, creating audio from other people's participation in the project rather than mine alone. To find out what the sound of social networking is like at the moment, download the zip folder from here:
www.veat.eclipse.co.uk/projects/FBsoundproj.zip
Once the file has finished downloading, extract the folder from the zip file and run either the exe file or the app file, depending on whether you are using a PC or Mac. Accept the security warnings to run the file and enjoy! If you wish to get involved in the project, log in to Facebook and search for the Uni Project Help group, become a Fan, and I'll endeavour to add you as an admin so you can contribute to the wall and see how it changes the sound of social networking.

Sunday, 25 April 2010

IDAT210 - Textbox fix

As mentioned earlier, I was not sure how to deal with the text in my application: I was not sure whether it was necessary, and I had problems with the amount of text compared to the amount of space available. I have now come up with a simple fix. Firstly, I decided it would be a good idea to keep the text in the app, as this gives people a chance to compare the textual data with the sound and visuals produced from it. I therefore had to solve the problem of space for each of the posts made. I decided the best way to do this is the most obvious way: add a scroll bar to the text box, so that viewers of the application can scroll up and down through the various posts being used without having to refer to the Facebook page. Looking into scrolling text boxes, I found I could use a Flash UIScrollBar component for the task. This then had to be coded in after the text had been created in the AS3 code, to ensure it matched the length of the text. The following is how the application now looks:
Hopefully, once I have the project properly launched on Facebook, the amount of text will be much larger and more varied than this, so there will be more use for the scroll bar, as well as a longer and more varied audio and visual piece.

IDAT210 - Public Access

In order to give my Facebook page members some understanding of what they were contributing, I was planning to upload the application to the internet somewhere, either as a Facebook app or as a page on my website which members could navigate to. This way, after posting, users could see how their contribution was used and how it changed and developed the 'sound of social networking'. However, on beginning to implement this, I came across one major flaw. Flash security settings are awkward: on a local machine, a swf needs specific permission from the Flash Player to access any address outside the local machine, and similarly, when posted online, a swf is simply unable to make any remote access to an address other than that of the server it is running from. This meant that to post my project application online so that it would run, I would need to access the remote information by some means other than Flash. After doing some research, I found that one way to do this is to use a small PHP script to pull in the data, which can then be accessed by the swf. I went ahead and tried to implement this solution, only to find that my web space does not support PHP, so I could not upload it to a space online where it would work.
This left me with one other option. The local security restriction applies to swf files, but not if the application is exported as an exe or Mac app file. I therefore exported the file in these formats, which can be uploaded to my web space and downloaded by others, who can then run it locally on their computers. This provides slightly long-winded access for the Facebook page members and others, but it does allow users to view the project, which meets my ultimate objective, if a little awkwardly.
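The abandoned PHP route was essentially a tiny server-side relay: the swf asks its own server for the feed, and the server does the remote fetch, sidestepping the cross-domain restriction. The original script is not shown in the post, so this is a hedged Python sketch of the same idea, with the fetcher injectable so the relay can be exercised without a network; the function name is my own.

```python
from urllib.request import urlopen

def proxy_feed(feed_url, fetch=None):
    """Server-side relay for a remote feed: the swf only ever talks to
    its own server, and this code performs the remote access on its
    behalf.  `fetch` can be swapped out for testing; by default it
    does a real HTTP GET and returns the body as text."""
    fetch = fetch or (lambda url: urlopen(url).read().decode("utf-8"))
    return fetch(feed_url)
```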

IDAT210 - Posting with Facebook RSS

Once I had created a fully working model of my project, the final stage was to check its compatibility with the Facebook page, in particular looking at fan posts and comments. To test this I used another account, which was not an admin of the page, to add posts and comments, and then ran the application. Unfortunately, I found that these posts and comments did not show up in the feed, and so did not show up in the application either. However, if I posted items myself (as the admin of the page), the RSS would pick the changes up. Comments, though, would not show up in the RSS or the application even when made as an admin. This meant there was only one way to get the feed to pick up the necessary data: ensure all users were made admins, and that they only posted directly onto the wall. This was, to all intents and purposes, a major setback for the ease with which I could get data added to the Facebook wall for my application, but it did have a workaround, so I decided to stick with what I had created so far and use the workaround to make my project work. This would mean inviting people to the page, making them all admins, and telling them to only post onto the wall. Admittedly this is more information than I ever intended to provide users of the page, but it was necessary for the project to work as a whole.

IDAT210 - Project tweaks

Once the body of the project had been devised and created, the next step was to make sure all eventualities were covered in terms of how the project functions. In particular this meant two likely problems: words longer than 12 characters, and enough posts to run the visualisation over the stage width. I made two simple solutions. For the characters issue, I had already decided that only 12 notes would be included, so I could either ignore words over this length, or continue to represent them visually but omit them from the audio. I decided the best method was to keep the word in the visuals, representing any word above 12 characters with another circle, so that the visuals are as accurate as possible to the data and people won't get too confused. This is shown below:
For the other issue, more circles than the stage width, I decided to keep with the notation theme and simply continue the visualisation on a new blank page, as if the score has been turned to its next page. This is shown below:

This ensures that the viewer of the project can see the visualisation of the entire audio track, rather than just a part, quickly and easily.
Another aspect I decided to add to my project was small breaks in the audio between comments. Originally my audio was one long, continuous string of sound. However, since my project is based around the idea of communication, I felt I should somehow demonstrate this through the audio that is created. I therefore added breaks between the comments, like rests in a piece of music, to emphasise the different comments and to give a sense of communication between the different sections of music, almost as if they are responding to each other. This mimics the communication in the comments themselves, and rests are also a standard concept in music composition, so the link between the project and music continues.
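Putting these rules together, the playback sequence can be sketched as follows: each word of up to 12 characters becomes a numbered note, longer words keep their visual circle but stay silent, and a rest is placed between consecutive posts. The actual project is AS3; this is a Python illustration of the logic, and the function name and event encoding are my own.

```python
def build_sequence(posts, max_note=12):
    """Turn a list of post strings into a playback sequence.
    Words of length 1..max_note map to that numbered note; longer
    words are encoded as None (circle shown, no note played); the
    string "rest" separates consecutive posts."""
    events = []
    for i, post in enumerate(posts):
        if i > 0:
            events.append("rest")  # breathing space between comments
        for word in post.split():
            n = len(word)
            events.append(n if n <= max_note else None)
    return events
```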

IDAT210 - Audio Visualisation

Since I created my audio in Flash, I felt it was important to create a visualisation for the piece as well, as this would give the viewer a better understanding of the audio and of the project as a whole. I began by thinking about what kind of visual elements I could use. I decided the best way to visualise the audio would be to attach one visual element to each note, so that when the note plays, the element also appears on screen. I didn't want the visuals to be too confusing, so I started experimenting with circles and lines to see what kind of generated visuals I could create. In the end, I decided the best method was simply to apply a different-coloured circle to each note and add these to the screen. With a different colour for each note, the circles would be identifiable, and the viewer would also get a visual sense of which length of word is used most, through which colour shows most on screen.
Once these circles had been integrated, the next challenge was how to organise them on screen. I began by placing the circles randomly on stage, as shown below:
However, I felt this did not give the viewer much understanding of the information, due to its random nature, and it prevented any differentiation between the comments represented. I therefore tried a more structured approach, putting the visuals into lines, with a new line representing a new post. This is shown below:

These were an improvement on the randomised version as they allowed the viewer to compare the different posts based on their colours and identify which parts of the visualisation apply to which parts of the audio.  However I still felt that more could be gained from the visualisation and it could be better fitted within the context of the project.  I therefore continued to experiment with the visuals until I discovered the version below:

I like this design as it displays the circles in a very music-based way. The format particularly reminds me of guitar tab and how that is laid out, but it is also similar to a standard score. I therefore think this is a good design for the visualisation, as it provides a notation of the text as well as audio for the viewer. This is why I chose to add the horizontal lines: both to better separate the individual circles and to create a stronger bond between the project and musical notation. To follow this theme through, the lower notes are placed at the bottom of the visualisation and the higher notes higher up, just like a standard score. To mark where the different comments start and finish, in both the audio and the visualisation, a vertical line is added, again similar to a standard score, making a strong link between the project and its musical undertones.

Since the different notes are now separated by their vertical placement, it is questionable whether to keep the colours, as these are not standard in musical notation. However, I have decided to keep them, for three reasons. Firstly, the colours let the viewer see a bigger difference between the rows, so they can identify which notes apply to which colour more easily. Secondly, they allow the viewer to better judge which notes are used most often, and so which length of word is most common. Finally, the colours help the aesthetic of the visualisation. For these reasons I have chosen to keep them within the project.
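The score-like layout described above can be sketched as a small placement routine: the x position advances left to right, the y position puts low notes at the bottom of the stage (remembering screen y grows downward), the colour is keyed by the note number, and a vertical bar line is recorded between posts. The stage height, spacing and function name here are assumptions for illustration, not values from the actual Flash file.

```python
def layout(posts, stage_height=240, spacing=20, note_count=12):
    """Place one circle per note.  `posts` is a list of lists of note
    numbers (1..note_count).  Returns (circles, bar_xs): circles are
    (x, y, note) tuples, colour keyed by the note number; bar_xs are
    the x positions of the vertical lines between comments."""
    unit = stage_height // (note_count + 1)
    circles, bars = [], []
    x = spacing
    for i, notes in enumerate(posts):
        if i > 0:
            bars.append(x)       # bar line before each new comment
            x += spacing
        for n in notes:
            y = stage_height - n * unit   # low note -> large y -> bottom of stage
            circles.append((x, y, n))
            x += spacing
    return circles, bars
```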
The only other element of this design that needs to be considered is the text element showing the posts that the audio and visuals reference. I have considered including this, as it gives the viewer something to compare the other elements against, for a better sense of what the project is achieving. However, it will very quickly fill up with posts as people comment on the Facebook page, and so it may become difficult to see and investigate the text in relation to the audio and visuals. This needs to be looked at further before my project can be fully completed.

IDAT210 - Audio and words

Once I had the basic parsing working for my project, I was able to begin coding the sound elements that would turn the posts into sound. The RSS library I found already listed out each individual post on the page, so my task was to take these and develop the code that would build my project. The first thing I did was create the code that gives the length of each word in a post, from which the sound is produced. By finding each space character in a string of text, I was able to count how many characters came before it and so calculate how long each word was. Once this was in place, I needed some sounds to apply to the word lengths.
For the audio produced from the project to be in some way understandable, I felt the best sounds to use would be notes from a piano, each note representing a different length of word. I therefore recorded a set of 12 piano notes, from the F below Middle C to the C above Middle C, not including any black keys. I left the black keys out because audio generated from text is likely to produce a lot of clashes between notes, which black keys would make especially noticeable; excluding them keeps whatever tune is produced in the key of C. I chose to record only 12 notes in total as I felt it unlikely that most words would be longer than this, with most hovering around 4 or 5 characters, especially since text speak is often used on platforms like Facebook. Any words in the text longer than 12 characters are therefore not played in the audio.
Once I had my sound files, I was able to code the link between the word lengths and the notes played. Each note is played one after the other, but I have chosen not to include any other transforms, so as not to make the audio too complicated.
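The word-length-to-note mapping can be sketched as follows: splitting on the space character gives the word lengths, and each length from 1 to 12 indexes one of the recorded white keys from F below Middle C to C above it. The original code is AS3; this is a Python illustration, and the note labels and function names are my own way of writing the mapping down.

```python
# The twelve recorded white keys, F below Middle C up to C above it
NOTES = ["F3", "G3", "A3", "B3", "C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

def word_lengths(text):
    """Word lengths found by splitting on the space character."""
    return [len(w) for w in text.split(" ") if w]

def notes_for(text):
    """Map each word length 1..12 to its piano note; words longer
    than 12 characters are dropped from the audio."""
    return [NOTES[n - 1] for n in word_lengths(text) if n <= len(NOTES)]
```

So a two-letter word plays the second note (G below Middle C), a five-letter word plays Middle C, and so on up the scale.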

IDAT210 - Facebook, Flash and RSS

The first stage in creating my project was to set up the Facebook location from which the textual data would be retrieved. The easiest way to get the information from this location is RSS, which checks the page for updates and then updates XML accordingly to store the information. This may seem like a simple task, but it actually took some development to find all the right elements to make it happen. Originally I planned to make a Facebook group which I could invite people to, and which they could add comments to. Once I had created the group, however, I found that Facebook does not provide an inbuilt RSS feed for groups. I therefore went searching for ways to get a feed from the page and came across two different tools: Yahoo Pipes and page2rss.com.
Yahoo Pipes is a small application builder which allows people to create all sorts of programs quickly and easily. One of the applications I found there was a tool specifically for making RSS feeds from Facebook, which took the group id of a Facebook group and created an RSS feed for that page. I tried to make this work, but it did not work well for my group and did not provide the feed I wanted. I continued searching for a solution and found page2rss.com.
Page2rss.com is a service similar to the Pipes app, but it takes any page URL and turns it into an RSS feed which checks for updates. I put in the URL of the Facebook group, hoping this would do the job. However, on adding information to the group wall, I found that the page did not seem to update often enough, and when it did, it included a lot of extraneous information from page2rss. With these two methods unsuccessful for Facebook groups, I began to consider other areas of Facebook I could use for my project. After searching through different sections of Facebook, I discovered that Facebook Pages have inbuilt RSS feeds. I therefore created a Facebook Page for my project with its own RSS feed, and got the URL for this feed so that the data could be parsed into my Flash application.
The next hurdle was getting this feed properly into Flash. To begin with I just tried a standard XML parse to get the RSS data into the Flash environment, as RSS is in the XML format. However, this only partly worked, as the important information seemed to be missing from the data, so I began researching getting RSS into Flash with AS3. After looking at a number of pieces of code online, I found that in AS2 you could apparently import RSS similarly to XML, but that AS3 may be more difficult. I then came across an RSS/ATOM Flash library for parsing RSS into Flash called NewsATOM_cs3, from http://lucamezzalira.com/2009/02/07/parsing-rss-10-20-and-atom-feeds-with-actionscript-3-and-flash-cs4/. This library is open source, developed from as3syndicationlib and made compatible with Flash by Luca Mezzalira, and it allows users to parse RSS 1.0, 2.0 and ATOM feeds. I downloaded it and used it as the base of my project, to allow RSS parsing for the Facebook page.

Saturday, 24 April 2010

IDAT210 - The project

So, after rethinking my strategy, I decided the project would be similar to the original idea, but more meaningful. The sound created through the project should update live, to demonstrate some kind of performance, and will be generated from comments that people make on a Facebook page. The people who provide the data will not be told specifically what the project is about, or what to write, in the hope that they will decide for themselves and create some kind of conversation out of it. By allowing this to happen, the project comments on the ways in which social networking can create good communication out of a lack of communication.
In terms of the audio, this will provide the 'sound of social networking' based on the length of the words people write. When posts are added, the music changes accordingly, providing the live-updating element. The project will be built using Flash, and the information will be retrieved using RSS.

Wednesday, 21 April 2010

Stonehouse - An Evaluation

The Transforming Stonehouse project has been a year-long investigation into the area of Stonehouse: its character, its people and how it can be improved. Ultimately, the work we carried out over the year can be broken down into three basic sections: The Site, The Hertzian Space, and the Final Arduino Project. Throughout these sections, the ideas my group and I had regarding the transformation of Stonehouse were investigated and developed, eventually leading us to an implementable project that could potentially transform Stonehouse for the better. Our ideas were continually developed throughout the year, while we also ensured we kept to one particular focus. At the beginning of the year we spent time looking at Stonehouse in relation to light, which we developed further during our long-exposure investigation into the concept of communication using light. This then transferred into our Arduino project, which led us to the project we have created.
In terms of our project, I am quite happy with what we managed to achieve with the Arduino. Although we have been unable to implement it properly yet, this was to be expected, so achieving a working model as we did was good. There are, however, a few elements of the final project that could be improved or developed further. Firstly, the concept of communication through light was good, but I feel our final project strayed slightly from it, as the use of text meant we were communicating regardless of the use of light. The only part light played in our project was the projection itself, so the involvement of light in the communication could have been developed further. The information we used for the project could also have been better researched, in terms of people's opinions and the words they felt would be opposite to the feel of Stonehouse. The words we used were chosen based on our group's personal opinion of the area and other opinions we had gathered over the year. Although these opinions seemed to be the general consensus, we did not find out the opinions of the general public in the area, so we may have misjudged how people feel about Stonehouse.
In terms of further development, one element that could be added to our projection is some kind of interactive feature. People like to investigate interactive projections, where parts of the projection react to their movements, much more than non-interactive ones. So, to get our information across better, and to transform Stonehouse further, our projection could be developed to react to people's movements in front of it.
Ultimately, our final project gives a clear message and works effectively, incorporating Arduino technology and ideas we have developed throughout the year. I am happy with the project we have created and feel it is a successful development for our site in Stonehouse.

Tuesday, 13 April 2010

Stonehouse - The projection finalisation demo

The final stage of our Transforming Stonehouse module was to create a demo of our project idea, based on what we had worked on over the year, demonstrating what we would plan to do, and how, if we were able to implement it properly in our site location.
Our idea was to use an Arduino connected to a light sensor to detect whether a pedestrian crossing button had been pressed. When a pedestrian presses the button on a crossing, a light appears on the console, usually either a 'WAIT' or a pedestrian symbol. By using a light sensor we could tell when this had been switched on, and so when the button had been pressed. The data from the light sensor would then both control and trigger a projection. The projection would be run from a Flash app: the data from the Arduino would be sent to the application, which would use it to modify and control the projection. If the data said the light was on, a selection of words would be projected, linking to our idea of communication through light. The number of words shown would be based on the length of time since the light was switched on. The words have been chosen to counteract current opinions of the Stonehouse area by being more positive, encouraging viewers to think more positively and so improving how people feel in the Stonehouse area. Below is a video demonstrating how the system works, using the Arduino with light sensor, a computer and a projector, but in a classroom environment, as we have not yet been able to implement it on site due to its requirements:

Transforming Stonehouse Demo from Rebecca Veater on Vimeo.
The plan for the projection is that only the words would be shown, projected either on the nearby wall or on the floor in front of the pedestrians. As well as making people feel more positive, the system is also designed to stop pedestrians crossing the road at the wrong time, when the traffic has not actually been stopped. People often press pedestrian crossing buttons but then cross before the traffic has been stopped for them, leaving drivers stopped for no reason. By distracting people with the projection until the signal comes, this should help to make the traffic lights and pedestrian crossing more effective and efficient.
To make this working demo we used an open-source Arduino-Flash library called Glue. From this (or from the Arduino software) we were able to download the StandardFirmata Arduino sketch onto the board, which allowed us to communicate with the Arduino directly from Flash, making it easier to connect the data to the projection image. The projection was then made in Flash, using additional AS3 script and a text file, parsed into the Flash file to provide the list of words. To run the projection we also needed the proxy server that came with the Glue software. This acts as a server between the Arduino and Flash within the computer, allowing the data to be transferred and the projection to be controlled.
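The sensor side of this setup boils down to edge detection: the crossing light counts as "pressed" at the moment the light sensor's reading rises above some threshold. Here is a minimal Python sketch of that logic over a stream of readings; the threshold value and function name are assumptions for illustration, not values from our Arduino code.

```python
def crossing_presses(readings, threshold=600):
    """Return the indices at which the light-sensor value rises above
    the threshold, i.e. the moments the crossing light switches on.
    Each such moment would trigger the projection."""
    on_times = []
    was_on = False
    for t, value in enumerate(readings):
        is_on = value >= threshold
        if is_on and not was_on:   # rising edge: light just came on
            on_times.append(t)
        was_on = is_on
    return on_times
```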

Stonehouse - Projection Content

The final element to consider in finalising our project is the content shown through the projection and its relevance to the project. All the way through we have been looking at light, which we developed into the concept of communication through light, so we felt this was the correct route to follow. There is a range of things that can be communicated, and a range of ways to do so. The projection covers the concept of light as a means of communication, but the content the light shows can take different forms: text, image, shape, line, colour, number and so on. The data it is connected to can also vary: facts, values, thoughts, opinions, words. We were particularly interested in commenting on current opinions of Stonehouse. Most people seem to feel Stonehouse is not a very nice location: dark, dank and run down, as well as an area for crime. We therefore felt it would be a good idea to demonstrate the opposite of these opinions, to try to brighten feelings towards the Stonehouse area. We decided to do this by displaying words on our projection opposite to those that might be considered an opinion of Stonehouse, which viewers would see and read, possibly giving them a more positive outlook on the environment. The number of words shown will be based on the length of time between a press of the pedestrian button and when the pedestrian light was last lit. The following list is the positive words we will use for our project:
  • STONEHOUSE
  • HAPPY
  • NICE
  • BEAUTIFUL
  • CREATIVE
  • IMAGINATION
  • INSPIRATION
  • COLOURFUL
  • BRIGHT
  • LIGHT
  • GREEN
  • NEW
  • CONFIDENT
  • STRONG
  • REGENERATION
  • QUALITY
  • CALM
  • GREAT
  • POWERFUL
  • DESIRABLE
  • LUXURY
  • ATTRACTIVE
  • LIFE
A random selection of these words will be shown on the projection, the number of which will be defined by the time since the last press of the pedestrian button. 
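The selection logic described above could be sketched roughly as follows. This is only an illustrative mock-up: the time thresholds, the linear scaling and the maximum word count are all assumptions, not decisions we have made.

```python
import random

# Hypothetical sketch of the word-selection logic: the shorter the gap
# since the last pedestrian button press, the more positive words are
# projected. Thresholds and scaling are illustrative assumptions.
WORDS = ["STONEHOUSE", "HAPPY", "NICE", "BEAUTIFUL", "CREATIVE", "BRIGHT",
         "LIGHT", "GREEN", "CALM", "GREAT", "LIFE"]

def words_to_show(seconds_since_last_press, max_words=10):
    """Map elapsed time to a randomly chosen set of words to project."""
    if seconds_since_last_press >= 300:
        # Assume anything over 5 minutes shows just a single word.
        count = 1
    else:
        # Linear scale: 0 s -> max_words, 300 s -> 1 word.
        count = max(1, round(max_words * (1 - seconds_since_last_press / 300)))
    return random.sample(WORDS, min(count, len(WORDS)))

print(words_to_show(30))   # frequent presses: many words
print(words_to_show(600))  # long gap: a single word
```

In a real installation the elapsed time would come from the light-sensor events rather than being passed in by hand.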

Monday, 12 April 2010

Stonehouse - Our project outline

Once we had decided on the various elements of our project we then had to outline how it was going to work and then create it.  The following is an image showing how our system would be placed in our site area:
There is also the option of placing the projection on the floor in front of pedestrians at the crossing, as this would be more obvious and might catch their attention more easily, although they might then find it easier to cross at the wrong time as they would not have to move away from their position at all.  Obviously this set-up is quite difficult to implement successfully in our current situation and would take a lot more work to integrate permanently into the setting.  However the general idea can be made as a mock-up to show the basic physicality of the project, including the working functions of the Arduino.

Saturday, 27 March 2010

IDAT 204 - AR Evaluation

Overall I am quite happy with the outcome of my augmented reality project.  The project provides the user with a range of information on the solar system and allows them to investigate it and its planets using the flexibility provided by augmented reality, such as 360-degree rotation.  However there are a few problems with the tool, and ways in which it could be improved.  Firstly, the main solar system section was planned to be animated so that the user could watch the solar system in action.  Although the model I made is animated, the amount of data the toolkit has to cope with seems to be an issue, as the animation does not really run.  This does not prevent the user from investigating the solar system, but it does mean the tool provides less information and so does not quite meet the original specification.  Another problem is the size of the solar system section.  In order for it to be usable the system has to be quite small, but this makes it difficult to see the various elements of the solar system.  This is a bit of a design flaw, and it would take some more consideration of the tool's design to make this better. 
Both of these problems could be improved in the future, and there are other elements that could be improved too.  The asteroid belt that is part of the solar system is missing from my model, so if I were to improve the tool I would probably include this.  I also intended to create better planet rings for Saturn and Uranus using particles, but in the end decided not to as I could not get the right look.  I could therefore work on this some more if I were to improve the tool.  As well as this, I did not end up recreating the toolkit marker to make it more relevant to my tool, so to make it a more well-rounded product I could also change this.
Apart from these minor changes, however, I have managed to design and implement a successful working augmented reality tool for educational use, and so am happy with what I have produced for this project.

IDAT 204 - AR final product

I have finally managed to complete my augmented reality product, an AR orrery.  For the project I have created a solar system model, and also each individual planet, so that learners using the tool can investigate the separate planets away from the entire solar system.  This also makes learning easier, as it would have been quite difficult to investigate all the planets using just the solar system view.  For more on this see the evaluation.  The individual planets are accessed through selection buttons on the Flash tool, to reduce the number of markers needed and to make the product a more system-based tool.  The textures I used for the planets were open source images downloaded from http://planetpixelemporium.com/planets.html
To have a go at using the AR Orrery please go to http://www.veat.eclipse.co.uk/projects/FlashAR/ARPage.html print off the marker and try it out.  Below is also an example of the AR Orrery working:

Monday, 22 March 2010

IDAT210 - A Context?

Although my project has a basic structure - social networking develops text which becomes sound - this method needs to be set inside some context to make it a viable project.  I am finding this context quite difficult to define: though I know the process would work, it is a bit clunky and almost acts as two separate processes strung together, which makes it difficult to come up with one underlying context that fits.  This post investigates the elements of my basic idea to see how I can rework it into a more solid and whole project idea.
First, let's look at the two processes that have manifested in my project.  The first of these is the Chinese whispers process.  This is the more experimental of the two and is also the main supporting element of the project.  It was influenced by the Dadaist poetry method of taking pieces of text and remixing them to produce a whole new piece of text.  Rather than doing this physically, my idea was to do the remixing in terms of response: a person comments on a piece of text and this comment becomes the new piece of text, which then gets passed on, akin to Chinese whispers.  The next question is why am I doing this? Firstly, it brings the concepts of Dadaist poetry into the modern era through social networking.  It can also be looked at, in terms of social networking, as an investigation into communication and whether a conversation develops or stays the same.  This, however, does not seem to sit quite right, so maybe I should rethink the process slightly so that the focus becomes more about translating Dadaist poetry techniques into a social networking environment that is reliant on communication. 
The second process is the part in which the final piece of text - all the lines from the Chinese whispers put together - gets reproduced as a sound.  This process seems like a bit of an aside, but it was influenced by the idea that there has not been any sound-related work for social networks, only visual, and that sound can be performed, linking my project back to digital performance.  However it is questionable whether a single piece of sound produced at the end of this project could be considered a performance, as it would be quite static, even if a visualisation were put with it.  It may therefore pay to consider how the sound element of my project could be redesigned or rethought to be more performance based.
So the first step is to redefine my underlying process, based on my original idea, so that it deals more closely with communication and Dadaist poetry in social networking.  Firstly I need to decide on the reasoning for my process, so that the process can be developed from its aim.  For this I need to consider my keywords again:
  • Generative Art
  • Digital Performance
  • Dadaist poetry
  • Social Networking
  • Sound/Music Production
Social networking is a relatively new form of communication, and has been criticised for reducing direct communication, even though it could be considered to be enhancing communication in general.  Networking is obviously an important part of social networking, as it allows users to spread their communications easily and efficiently to many people.  In order to justify using a social network as part of my process, I have to base my reasoning on something that social networking provides exclusively, such as these, rather than just access to a large number of people.  The concept of the original process came from comments in social networks such as Facebook, in which you can see various comments and the tangents of thought that occur from one original post.  It was this openness of thought that I was planning to capture through my process, but in a different way to that achieved in a normal 'comment' situation.  Normal comments are also in context, whereas my original process was planning to remove the context of the comment to provoke a greater reaction in the comments made.  But how are social networking comments different to passing around a piece of paper which everyone in a room writes on?  I think the answer is a number of things.  Firstly, the distance between people, a defining element of social networking, is an important area to consider, along with the instantaneous nature of social networks.  A piece of paper could not be passed from one person in England to a group of people in America without a substantial amount of time being taken, if social networking tools are not used.  There is also a longevity to social networking, as people can go back to their comments or posts at any time and add something to them.  Similarly, people can use this longevity to look over the changes in, say, a person's status over a period of time.  
The openness of the information here is quite important: someone could keep a diary which they could look back on, but this would only be available to the individual and anyone they showed it to, rather than others choosing whether to look at the information or not.  It is interesting to note, however, that there is a similarity between comments on social networks and the same process on paper, as comments cannot be edited or changed, only removed or added.  Overall, though, it is the unlimited information-sharing capability of social networking sites that makes them so different to other forms of communication, as it is very easy to share and receive thoughts and opinions to and from many people, across a wide range of locations, very quickly.  Given these differences I think it is viable to use social networking comments for my project, as the method is different to others; the question now is precisely how to deal with them to make my project work.
One thing that is obvious in this remodelling of my idea is that the sound produced should be running and updating live from whatever information it is based on.  This is because, in order for it to be a digital performance, it needs to be performed over a period of time, rather than just played at any point.  The only problem is that I currently have no idea how to set up a system that changes and plays the sound live from the data.  At the idea-creation stage, though, I feel my knowledge of any subject should not cause restrictions, so for now this problem will be ignored.
Another element that I feel is important in this redeveloped idea is that the sound is in some way based on the networks between the people whose comments are used.  This is because social networking visualisations are very focused on the networks and communities created, and so to create a sound of social networking in response to these many visualisations I need to include the main content of the image-based versions.  However I am not sure how this would be achievable, as the networks between people are not obvious enough for a system to know about them as it creates the sound.
The next question to answer is how the sound is going to be created.  The main plan was to use the text written by participants, but there are other options too: the addition of a comment in itself, the participant's name, and the time of submission could all be used.  However the text written will be the most dynamic and so the most interesting to use.  This decision depends on how complex a piece of work I wish to create, as the text will make the sound development more complicated, whereas something like date and time will be easier to manipulate.  The main problem I am struggling with is that I want an organic process as the major section of my piece, as this is what makes digital art most different to previous art.  If I just use something like the date and time of a post, there is no process other than the conversion to sound, which is a technological process rather than an artistic one.  Whereas if I get each participant to carry out a process of sorts with a piece of text, for example, the process is there and so it will be part of my project.  This is what I would prefer to do, but it means I have to develop a process that has a context for the project and is viable in terms of what I am trying to do.
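To make the text-to-sound option above concrete, here is a minimal sketch of one possible mapping. The letter-to-MIDI-note scheme is purely an assumption for illustration; word length, punctuation or posting time could equally feed into volume or note duration.

```python
# A minimal sketch of one way comment text could drive the sound:
# each letter is folded onto a MIDI note number, so every new comment
# appends a new phrase. The mapping itself is an assumption, not a
# finished design.

def text_to_notes(text, base_note=60):
    """Map each letter to a MIDI note in the octave above middle C (60)."""
    notes = []
    for ch in text.lower():
        if ch.isalpha():
            # 'a'..'z' folded onto the 12 semitones above the base note.
            notes.append(base_note + (ord(ch) - ord('a')) % 12)
    return notes

print(text_to_notes("Dada"))  # [63, 60, 63, 60]
```

A live version would run this over each new comment as it arrives and hand the note list to whatever plays the audio.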
Dadaist poetry relies on taking a piece of text and randomly re-organising the words to create a new piece of text.  One option therefore is to loosely replicate this, for example providing a piece of text and getting one person to change a single word and then pass it on.  Another option is to provide a piece of text which the user has to randomly re-order, then pass this on to the next person to re-order again.
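The random re-ordering option is simple enough to prototype in a few lines. This sketch only shuffles the words of one text; in the project each participant would apply it (or their own re-ordering) before passing the result on.

```python
import random

def dada_remix(text, seed=None):
    """Randomly re-order the words of a text, Dadaist-poem style.

    A seed can be given to make a particular remix repeatable.
    """
    rng = random.Random(seed)
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

source = "to make a dadaist poem take a newspaper and some scissors"
print(dada_remix(source))
```

Every remix keeps exactly the original words, only their order changes, which mirrors the cut-up method the post describes.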
I feel that the process I use should somehow comment on the freedom of thought achieved by social networking sites and how this develops communication.  Social networking is currently focused on portraying how you are feeling at the present time, and updating this as often as possible to keep the information current; in a way it aims to portray your digital personality, and this key feature, based on thoughts, emotions and opinions, should somehow be considered in my project process.  This leads me back to my previous process to some extent, as the point of that was that each participant posted their own comment (opinion) on the previous person's comment.  The final outcome would be an organic development through people's opinions and thoughts. 
There is also the option of not providing the participants with much information at all, other than that they should post something as a comment.  This is an interesting idea, as a lack of communication about what to do will hopefully still produce some form of communication between the participants, demonstrating how social networking aids communication, whilst metaphorically incorporating the concept of the reduction in communication caused by social networks.  These communications, as text, can then be taken and used to manipulate and add to the sound file.  This also incorporates some element of freedom of thought and opinion, as it is left open to the participants how the content transforms.
I quite like this idea and so after much deliberation through various problems this will now become my re-developed construct for my project.  The purpose will be to demonstrate how social networking aids communication.  The process will be the organic development of communication achieved purely through the participants own opinions and thoughts on content that has been added previously. The sound will hopefully be transformed as the communication is changed so that there will be a performance element.

Sunday, 21 March 2010

IDAT211 - The Game Design

The game for the application will be narrative based.  The game itself will be set in an environment similar to the real world so that learners are able to relate to the system.  The narrative will also allow users to make choices about the elements they learn, so the path they take through the learning process will be self-directed even if the user chooses not to use chapter selection.  This does begin to stray away from behaviourist learning, as it is more self-driven, akin to cognitive learning, but including elements like this makes the game more interactive and so more engaging than having the process chosen for the learner.  The following game design is for a section of the game that focuses on the major scales section of grade 1 music theory.  This sits in the middle of the learning process for music theory and so would also fit in the middle section of the overall game.  Although users will be able to choose certain paths, the game will not be too fluid, as otherwise the user may start with something too hard, or come across a subject that is too easy for their current stage of learning.  All interaction within the game will be controlled through the MIDI keyboard, as with the general interface.  To ensure that the MIDI keyboard controls do not conflict between the game and the interface, the interface has a pause/continue function which locks the other interface areas, so only this button needs to be mapped to a key away from the keyboard keys used in the game.  Below is the game flow design; to see any of these images in a larger form simply click on them:
Starting screen
Initial Game Screen
Question 1
Learner answering

Learner answered

Next question
Learner answers wrong

(this would then go back to the screen above this)
Learner then reanswers correctly

After the other scales have been done correctly in the same style as those above the game is completed:

Next section of the environment and narrative


In these designs the character has not been designed as it requires complex imagery, but the basic format of the game is as it would be in the actual application.

Wednesday, 17 March 2010

IDAT 211 - System Flow and Wireframes

Below is the beginning of the physical designing process for our elearning tool demonstrating the flow of the system and also the basic wireframes for the screens of the tool.  To see a larger version of any of the images below simply select them:

System Flow
Wireframes
Main Menu:

Menu with help etc box:


Game Screen:


Game Screen with help:

Free space screen:
This design for the free space screen is just an idea as we have not fully tied down what will be shown on this screen to make it functional whilst also in a free play situation.

IDAT 211 - Conceptual Development

The Application
The application will be a music theory application aimed at providing users with a more informed understanding of music and music theory.  To make this possible, the application will provide learning games through which learners can practise music theory.  The application will be MIDI based and will use a MIDI keyboard as the interface, rather than a computer keyboard or mouse.  This will make the application more unique and engaging to the user whilst also promoting and developing their understanding of music, and will ensure we match the expectations set out in the users' needs analysis.

Design and Learning
The main feature for learning will be the games played in the application, one for each element to learn.  The games will follow some form of narrative which allows development through the learning, and this will help make the games more fun for the learners.  The learner will be given a question based on a topic of music theory, for example reading the staff and notes, which they will then have to answer using the MIDI keyboard as their interface.  If they get the question right the narrative of the game will progress, acting as positive reinforcement for behaviourist learning, whereas if they get it wrong the narrative will not change (a negative reinforcer) and they will be asked the question again.  The use of the MIDI keyboard as the interface for these games means there will be greater development in practical learning as well as theoretical, as the learner will be carrying out both whilst using the program.
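The reinforcement loop described above could be sketched like this. The questions and MIDI note numbers here are illustrative assumptions (middle C is note 60 in standard MIDI numbering); the point is only that the narrative position advances on a correct answer and stays put on a wrong one.

```python
# Hedged sketch of the behaviourist question loop: a correct answer
# played on the MIDI keyboard advances the narrative (positive
# reinforcement); a wrong answer leaves it unchanged and the question
# is asked again. Questions and note numbers are assumptions.

QUESTIONS = [
    ("Play middle C", 60),
    ("Play the D above middle C", 62),
]

def answer_question(narrative_step, expected_note, played_note):
    """Return (new_step, feedback) for one answer attempt."""
    if played_note == expected_note:
        return narrative_step + 1, "correct - story continues"
    return narrative_step, "try again"

step = 0
step, feedback = answer_question(step, QUESTIONS[0][1], 60)
print(step, feedback)  # 1 correct - story continues
```

In the real application the played note would arrive as a MIDI Note On event rather than a plain integer, but the branching logic would be the same.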
This interface also means the learning tool better caters for all the main types of learner: visual, auditory and kinesthetic.  Visual learners will be able to learn through the on-screen imagery, hints and help given throughout the tool.  Auditory learners will be able to hear the note or sound they are trying to learn and compare it through the sound of the MIDI keyboard.  Finally, the inclusion of the MIDI keyboard will cater for kinesthetic learners, as they will be working practically with the keyboard to use the tool and find the answers.  This demonstrates our design to be a well-rounded tool in terms of the ways people learn, and also shows that it should be easy to use for any type of learner, another expectation outlined in the needs analysis.

Features
Below is a diagram of the features we plan to include in the product, covering persistent features, technological features and variable features (features that are not persistent):


Context in which the tool would be used
The most common use for this application is likely to be alongside some form of music tuition, be it individual or curriculum based.  It will therefore be software, or an online tool, used mostly by schools to support musical learning and aid the development of understanding.  It would be good for the tool to be online based, as this is more accessible to any learner whether they are at school or not, but the MIDI keyboard element makes it more sensible for the application to be software that can be purchased and installed onto a computer, particularly as part of a school's computing system.
The tool should be used alongside external music tutoring, as each can act as supporting material for the other.  In this situation the tutor would be expected to select elements for the learner to go through in the application to support the current learning, and the results of this learning would then be given back to the tutor as feedback so the learner's progress could be gauged.  It would still be possible for the application to be used in a self-tutoring situation if necessary, but progress may be slower. 

Feedback
As well as allowing for feedback to be provided to the tutor on a students learning, the learner will also get feedback from the system throughout the tool.  In particular a scores section incorporated into the features of the tool will allow the learner to view how successful they have been on each section and so they will be able to gauge where improvements need to be made in their learning. 

Look, Feel and Related Work
When first looking at the design of the tool we instantly felt that colour should be incorporated to make the learning of music theory more interesting.  The first aesthetic we considered was therefore weather based, including a rainbow-based key colour format.  The aesthetic would then develop to look similar to that chosen by Vimeo, shown below:
Other cartoon-like approaches similar to this are used by games aimed at our chosen target audience, such as BBC Bitesize:
and Zoombinis:

Both Bitesize and Zoombinis are very successful e-learning tools, so it could be suggested that their similar look and feel played a part in their success.  It would therefore be good to design our tool to have a similar look and feel.  In particular, these tools seem to rely on a character who acts as the learner's guide to the tool.  In Bitesize the character is the fish, whilst in Zoombinis it is the Zoombini.  It may therefore be particularly important to consider incorporating a guiding character into our tool, to make it more consistent and to allow learners to relate to it more easily.

In terms of related music theory learning tools there is Elearning Music Theory, as described in this post: http://beckyvidat.blogspot.com/2010/02/idat-211-elearning-music-theory.html.  There is also a whole range of small applications available online, varying in quality.  Most, however, are much more PowerPoint-like pieces, writing out the theory and requiring little or no interaction from the learner, let alone making it engaging.  For example, musictheory.net is quite similar to Elearning Music Theory.  It has very basic interactive training tools (shown right) and a lot of reading-based learning elements that require no interaction at all.  Although this site provides a large range of different theory-based tools and information, it has a very poor quality of engagement and no end target or encouragement.  In testing this tool I became instantly bored with it, so it would definitely not be successful for teaching younger audiences.  This overview is much the same for all the other music theory pages I have come across.  Although there are a few games out there that teach music theory in a more fun way, these seem quite outdated or not particularly engaging.  For example, there is a whole range of music theory games on http://www.musictechteacher.com/ but these are inconsistent in theme and vary in engagement.  I therefore feel there is a definite space for a tool such as ours, which takes a new and more dynamic approach to learning music theory.

Related Literature
To come soon...

Tuesday, 16 March 2010

Stonehouse - Controlling our projection

Once we had decided that our Arduino board was going to control a projection in our Stonehouse location, we needed to look at what information could be captured from the Arduino to control what the projection looked like.  There were a number of options, as listed below:
  • microphone info - projection of sounds
  • time between uses of crossing - track times between light sensor triggers
  • amount of uses of crossing - track number of times light sensor triggers
  • motion sensing - number of people passing through
However, given that our aim for the projection is to encourage people to use the crossing properly more often, we felt the best way of doing this would be to use the time since the pedestrian light was last switched on.  The shorter the time between presses, the more the projection would change; this would encourage pedestrians to press the button more often, and so hopefully use the crossing correctly more often.
This does not mean the other options for manipulating the projection could not also be used alongside the time-based one, but the time-based control would be the main one, with the others providing extra data to the system to make the projection more versatile.
The next step in implementing our final project is to come up with a working Arduino-to-Flash system and then work out what our projection is going to do and how this links to our overall idea.

Saturday, 13 March 2010

IDAT 204 - AR model development

Once I had obtained the information I needed to start building my solar system model, I began constructing it in Blender.  Building it in Blender meant I could export it to the Collada format required by the FLARToolKit.  I then worked out how to build the model so it would be proportional in size and distance.  I used the Blender grid to work out proportional sizes based on the facts I had found, so that every 10 squares of distance counts as 1000 million km, and every square of size counts as approximately 35,000 km.  The ratio between the two scales is therefore 35,000:100,000,000, or 35:100,000.  It is a shame that my model can't be completely in proportion, so that it represents the solar system precisely, but the enormity of the distances between the planets in comparison to their sizes makes this impractical for the project.  This is the second-best option and should hopefully still be quite representative.
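As a quick check of the two scales stated above, here is the conversion worked through for Earth. The Earth figures (roughly 12,756 km diameter, 149.6 million km from the Sun) are approximate values assumed from the planetary data table, not exact model inputs.

```python
# Two separate model scales: 10 grid squares of distance = 1000 million km,
# and 1 grid square of size = 35,000 km. Earth values are approximate
# assumptions for illustration.

KM_PER_DISTANCE_SQUARE = 1_000_000_000 / 10   # 1 square = 100 million km
KM_PER_SIZE_SQUARE = 35_000                   # 1 square = 35,000 km

earth_diameter_km = 12_756
earth_distance_km = 149_600_000

size_squares = earth_diameter_km / KM_PER_SIZE_SQUARE
distance_squares = earth_distance_km / KM_PER_DISTANCE_SQUARE

print(round(size_squares, 2))      # ~0.36 squares across
print(round(distance_squares, 2))  # ~1.5 squares from the Sun
```

Even with separate scales Earth ends up well under half a square wide, which shows why a single shared scale for size and distance would be unworkable.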
As well as sorting out the sizes to be proportional, I will also have to decide how I want to represent the speed of the orbits, as this will have to be in seconds or minutes rather than years.  The model therefore currently looks like this:
As well as having to put size and distance on separate scales, I have also had to shrink the Sun so that it is NOT proportional to the size of the other planets, as at proportional size it is too big against the rest of the model to be practical.  There is still a lot of work to be done on this model, including particles, textures and animations, and these are the elements I now have to tackle.

Thursday, 11 March 2010

IDAT 204 - FLARToolKit Testing

To be able to implement my AR orrery I will need to get my model working as part of a 2D matrix-marker-based AR tool.  To do this I can use the FLARToolKit, a free Flash-based toolkit that uses Collada models and matrix marker recognition to create augmented reality.  I therefore have to make sure I am able to use this toolkit and implement my own models, to ensure that my project will be successful. 

The first step was to download the FLARToolKit and look at how the code worked in terms of choosing the model and pattern for the AR.  This was a fairly simple task, as the model is selected through a reference to a file path at the beginning of the code.  I then produced a simple model in Blender which I could export as a Collada file and apply to the FLARToolKit code.  The following was the result I achieved:

The test in the video above also included a small amount of animation, although this is not obvious from the video.  This animation also worked fine, so the results from my FLARToolKit test are promising for the rest of my project.

IDAT 204 - AR Research into planets

In order to make an informative and accurate AR orrery I will need to know a whole range of information about the planets that will be included and manipulated.  The following is a range of research relating to this topic:
There are 8 official planets, but orreries often also include Pluto, as it was classified as the 9th planet until its downgrading in 2006.


For my model I particularly need to know, for each planet, the distance from the Sun, size, orbital path, time for a single rotation, and time for a single orbit.  With this information I should be able to construct a relatively accurate model of the solar system.  I am therefore planning to use the majority of the data in the table below, found at http://www.klbschool.org.uk/interactive/science/p_table.htm
 

However, if I intend to make the solar system model in proportion to these values, I will have to consider carefully how to do this.  For example, the distance to the Sun is given in millions of km.  I will not be able to simulate this at the same proportion as the diameter, which is in km, as I would have to model the planets too far away from each other.  I will therefore keep distance and diameter separate, so that all the sizes are proportional to each other, and all the distances are proportional to each other, but size and distance are not measured in the same proportion.
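A quick bit of arithmetic shows why a single shared scale is impractical. The Earth figures used here are approximate assumptions, not values taken directly from the table.

```python
# If distance used the same scale as diameter, the Sun-Earth gap would
# be tens of thousands of model units wide. Approximate assumed values:

earth_diameter_km = 12_756
earth_distance_km = 149_600_000

# With 1 model unit = 1 Earth diameter, how far away must Earth sit?
units_away = earth_distance_km / earth_diameter_km
print(round(units_away))  # ~11728 Earth-diameters from the Sun
```

A marker-sized model simply cannot hold a planet that far from its Sun, which is why the size and distance proportions have to be kept separate.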
Now that I have these values I can begin to create the model that I wish to use for my augmented reality solar system.

Wednesday, 10 March 2010

Stonehouse - Basis of our Arduino Project

For the final part of the Transforming Stonehouse work we are required to develop and implement an Arduino-based project as the final realisation of the ideas we have been investigating throughout the year.  Given that our group has based its work on light and communication, we needed to come up with an idea for our Arduino project that implements this in some way.  We had already looked a lot into what we could project as part of our project, so felt it would be more consistent to use the Arduino in some way to relate to a projection in the Stonehouse area.  After looking at the various possibilities for Arduino boards, and relating this to the elements we have in our site area, we decided to use the nearby pedestrian crossing as the main environmental change that would act as the input for our Arduino.  When a pedestrian presses the button on the pedestrian crossing, a light is turned on on the pedestrian console.  We therefore felt we could use a light sensor as part of the Arduino to detect this change and trigger a change in the projection. 

The idea behind this is that it could make pedestrian crossings and traffic lights more efficient: people usually press the crossing button but then cross whenever they judge it safe, which is often not when the lights have actually stopped the traffic, so traffic is stopped unnecessarily.  By triggering a projection when the pedestrian presses the button, they may well be distracted and so wait longer at the crossing, making it more likely that they cross at the right time.

The process that our Arduino project will go through is as follows:
  • pedestrian presses the button
  • the console light turns on
  • the light sensor detects the change, which is sent to the Arduino
  • the Arduino data is sent to a Flash app
  • Flash interprets the data and changes the image accordingly
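The light-sensor step in this chain is essentially edge detection: we only want to trigger the projection when the console light switches from off to on, not continuously while it is lit.  The sketch below shows that logic in Python purely as an illustration; the sensor readings and the threshold value are invented, and on real hardware this logic would live in the Arduino sketch itself.

```python
# Detect off->on transitions of the crossing light in a stream of
# light-sensor readings; each rising edge is one button press to report
# to the Flash application.

THRESHOLD = 600  # assumed sensor value separating "light off" from "light on"

def detect_presses(readings):
    """Return the indices at which the light goes from off to on."""
    events = []
    previously_on = False
    for i, value in enumerate(readings):
        now_on = value >= THRESHOLD
        if now_on and not previously_on:
            events.append(i)  # rising edge: trigger the projection change
        previously_on = now_on
    return events

# Example: two presses hidden in a stream of readings
print(detect_presses([100, 120, 800, 810, 150, 900]))  # -> [2, 5]
```

Tracking the previous state is what prevents the projection being re-triggered on every reading while the console light stays on.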

The next step in this project development is to consider what and how the projection will change.

Tuesday, 9 March 2010

IDAT 211 - Needs Analysis

The first stage in designing a learning tool is a needs analysis of the learners, identifying the knowledge they already hold and the knowledge the tool should provide, as well as defining the tool's specific audience and how the tool will approach this audience.

Audience and Skills
The subject of our learning tool is music theory, in particular the basics learnt at the beginning of any musical education.  By producing a tool for this stage of learning we will be providing a support to standard music teaching that aims to develop understanding more efficiently.  Firstly we must define the target audience of the tool based on the type of knowledge we wish to provide.  Most people learning the basics of music theory are school children who are either learning an instrument or have music lessons as part of their curriculum.  In particular this covers students aged 11-14, especially those in years 6 and 7, in the transition between primary and secondary school.  At this age learners should have a basic knowledge of instruments and will have taken part in musical pieces, often as singers, but will not have looked particularly closely at musical theory.  It is also common at this age for students to take up an instrument individually, so the tool will support these students in their first years of learning theory.  Most of this audience therefore have very little knowledge of music theory, which this tool will aim to improve.

Expectations and Goals
In using the tool students will aim to increase their knowledge of music theory beyond that which they already have.  In completing the learning provided by the tool the students should have achieved a complete understanding of grade 1-2 music theory and should be able to apply this to various problems.  In participating in the use of this tool users will have a number of expectations for the product which are listed below:
  • Easy to use
  • Engaging
  • Explain a range of theory
  • Be useful
  • Be fun
  • To improve their knowledge
As well as the users themselves, there are also the expectations of the tutors who will be involved in getting the students to use the software, for example by setting particular tasks and supplying relevant supporting material.  The expectations of these users are below:
  • Improve students' knowledge
  • Give some method of rating understanding
  • Support current practical work
  • Provide challenges
  • Provide help
  • Engage the student in the learning
Our tool therefore has to match all of these expectations for it to be a success. 

Need and Demand
A needs analysis must also consider the need for, and demand for, a product.  Our tool does not have a particularly high need value, but providing a tool such as this allows for a better balance of knowledge among students of a certain age.  This makes the level of understanding more measurable, so the later development of this knowledge is likely to be fairer and more successful for all students.  Demand is likely to be quite high if the tool is accepted as a viable learning aid, as most schools teach music and musical tuition is also very common.

Delivery Format
Since this is an e-learning design project, the tool must initially be delivered in some digital form.  However there are still a number of options as to how the material is presented digitally, including virtual reality, augmented reality, screen-based, projected and dome-projected formats.  We feel the most appropriate of these delivery formats is screen-based, i.e. via a computer monitor.  The tasks and learning will be based on the individual, and this method is the most accessible and usable for that purpose.  Although VR or AR could be used individually, we felt these would be too complicated for learners to engage with and would also be less accessible.  In particular, learning can begin much more quickly within a recognised and familiar environment, i.e. a screen-based interface, making the tool more efficient.

To summarise, our tool will be a screen-based music theory learning tool targeting students aged 11-14 who are studying music.  The aim of the tool is to increase the learner's understanding of music theory up to grade 1-2 standard, and it should meet the expectations of both the learners and the music tutors involved.  Ultimately the tool should allow for a more standardised understanding of music theory among students, making further development of this knowledge easier and more efficient for all concerned.