Saturday, September 10, 2005
Hmm, looking at that I am trying to remember exactly what it meant.
Ahh, now I remember.
I am used to developing in Internet Explorer, so whilst coding and testing the app I didn't pay much attention to Firefox. I kind of just assumed it would work.
So basically the moral of the story is that you should always think about cross-browser support before you start to develop; it will save you a lot of time in the long run. Think about features that one browser has that the other doesn't. If you do this correctly you should be able to code around the missing features, much like CSS developers do.
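The classic example of coding around a missing feature is creating the request object itself: Firefox and Safari provide a native XMLHttpRequest, while Internet Explorer 6 needs an ActiveX object. A minimal sketch of that kind of defensive check (returning null where neither is available):

```javascript
// Create an XMLHttpRequest in a cross-browser way: native object
// where the browser supplies one, ActiveX fallback for IE 6,
// and null when the feature simply isn't there.
function createRequest() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                   // Firefox, Safari, Opera
  }
  if (typeof ActiveXObject !== "undefined") {
    return new ActiveXObject("Microsoft.XMLHTTP"); // IE 6
  }
  return null;                                     // not supported at all
}
```

Checking for the feature rather than sniffing the browser name means the same code keeps working when a browser gains (or loses) support.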
The next version of the application will be completely cross-browser compatible, and I might even consider testing it in Safari and the like [I need you, the reader, to help me here :)].
That pretty much concludes the Success and Failures series, and it has been a great help to work out the requirements for my next version of the application.
Hopefully one of tomorrow's entries will contain the start of formalising some requirements. :)
All this is using the Yahoo API and it works pretty well. For each tag that I select from the article that is input into the Ajax tagger, a link is shown for each of the posts that contain that key.
Give it a go if you want: http://www.kinlan.co.uk/AjaxExperiments/AjaxTag and use this entry as an example input.
This is just a test at the moment, but some other more important aggregation features will be in the next version of the application.
|Tagger [feed], Ajax [feed], Citations [feed], Api [feed], Keywords [feed], Tag [feed], Search [feed]|
There is not too much to say for this one really. I was hoping that it would generate a lot of needed interest in my web page so that I might be able to earn some ad revenue, but after working on the project a little I have realised that revenue is not too important; just getting people to use it matters more.
Once I am happy with the requirements for the next version I will have to work out some sort of "marketing" campaign, which will really highlight the benefits of this application.
|Ad Revenue [feed], Ajax [feed], Traffic [feed], Marketing [feed]|
MSN are launching APIs that enable developers to access their search results, according to the Search Engine Journal, which quotes an article by Search Engine Watch. Basically, if you go to http://msdn.microsoft.com/msn/ you see a holding page which tells us to come back on Tuesday 13th of September.
Expect me to investigate them and talk about them here and on http://tagger.kinlan.co.uk. I just hope they do something innovative like Yahoo's related searches, Contextual searches and Term Extraction and not just provide access to their search results [which would still be cool though].
|Search [feed], Msn [feed], Tagger [feed], Related Searches [feed], Msdn [feed], Api [feed], Microsoft [feed], Search Engine [feed]|
Friday, September 09, 2005
As in my previous post, when I was creating the application I had grand ideas about the people who might use this application. I had visions of thousands and thousands of people using it and having to ring up Yahoo to ask to have my 5000 daily query limit increased.
In actual fact, I was really the only person who used the application. Even though I was posting about it in my journal I didn’t receive any feedback at all from the public.
This got me thinking. Even though the application is primarily for my benefit, I would dearly love other people to use it. I would need to get people talking about it, and I would need all the information regarding the AJAXTagger to be available in one place.
My first step to more exposure is this Blog. All my content will still be aggregated on http://www.kinlan.co.uk, but this blog is designed to deliver all the information about the development and use of the application in one simple place.
My next step after this will be to promote it. I need people to start talking about it; I need people to investigate the functionality it offers; I need you to use it, break it and criticise it!
Do you have any ideas how I might do this? Email me or leave a comment. I need the feedback to make this application better for me and for you. Every comment and suggestion will be taken on board.
|Feedback [feed], Blog [feed], Ajax [feed], Suggestion [feed], Journal [feed], Ask [feed]|
To be brutally honest, when I had this idea for the AJAXTagger I thought I would solve my problems with tagging my posts, and everyone else's problems too.
The idea is a simple one: to provide the user with tags that Technorati can pick up without the user having to manually handcraft each tag; additionally, each tag would be picked automatically based on the context of the journal entry.
There were some other features I wanted in the app based around related searches and citations, but they had to be pulled.
As I was creating this as my first ever AJAX, XMLHttpRequest application I knew that it wouldn’t be great, but I still had delusions of grandeur.
In the end the only person it helped was me. Looking back, this is absolutely fine by me. It helped me not get bogged down by all the extra leg work to make my blog a little more special. It helped me enjoy Blogging.
So….. [I am trying to think why it was a failure, when in actual fact it was a very good learning experience].
I think the next version of the application will be more focused around what I need, but with an eye on what extra value it might provide other users. If it adds extra value to my readers then that is all that matters.
If you have any features that you would like to see, email me or post a comment.
|Delusions Of Grandeur [feed], Technorati [feed], Ajax [feed], Tag [feed], Xmlhttprequest [feed], Tagging [feed], Citations [feed], Blog [feed]|
If you were to look at my application without me explaining what it did, you would not understand what it did and what it was supposed to achieve. I am a great believer in intuitiveness and the ability to understand what an application/product is supposed to do with very little prompting from a user manual.
If you look at Google's site, you know you are supposed to use it to search the Internet and that when you try to search you know that you are about to do it….. If you get my meaning.
The AJAXTagger wasn't in the slightest bit intuitive: it wasn't obvious what it was supposed to take as input, and neither was it clear what it should produce as output. At each stage of interaction with the application it wasn't clear what the user was supposed to do to move on to the next stage.
I asked a couple of my friends to try it, and they got stuck at the first page (the page where you insert the text that you want tagging). They didn’t see the point (until I showed them a Demo). Once they started using it, it was quite simple for them to see how to use it.
I found it really useful. From taking about an hour to do a complete post (including Technorati Tags) it now only takes the time it takes to write the entry; about 30 minutes in total.
The next version of the application must be intuitive. It must be obvious what the program does, what it achieves and what user interaction it expects. Not only for my own use: if I want to promote the application and get as many people as I can to use it, then it must work and work well.
Some of this can be solved by some hints and tips in the application; others can be solved by better visual cues (maybe including a sample document to tag, to show the user what is expected as an initial input), and perhaps other problems can be solved by a more insightful UI. Finally, better user documentation would be needed.
If you have any recommendations about making the AJAXTagger easier to use and more insightful please email or post a comment. I will respond to every comment.
|Technorati [feed], Google [feed], Intuitive [feed], Search [feed], Queues [feed], Tagging [feed], Ajax [feed]|
Thursday, September 08, 2005
|Blog [feed], Html Application [feed], Tagging [feed], Ajax [feed]|
So being a UK Blogger, I promise that I will upload an OPML File so that you can know who I read. :)
I think I am starting to get the point of OPML. Well, the basics of using it in an RSS feed and online journal. I will just be using OPML to show a list of feeds and how I have categorised them.
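For anyone who hasn't seen one, an OPML file of that sort is just a small XML outline; a minimal sketch (the feed names and URLs here are placeholders, not my actual subscriptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.1">
  <head>
    <title>Feeds I read</title>
  </head>
  <body>
    <!-- one outer outline per category, one inner outline per feed -->
    <outline text="AJAX">
      <outline text="Example Blog" type="rss"
               xmlUrl="http://example.com/rss.xml"
               htmlUrl="http://example.com/" />
    </outline>
  </body>
</opml>
```

Most aggregators can import a file like this directly, which is really all I want it for.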
I am also tempted to provide a part of the site where I explicitly just link to other interesting articles that I read in the feeds that I subscribe to, along with a small comment.
|Bbc Radio 4 [feed], Rss Feed [feed], Web Logs [feed], Blogosphere [feed], Blogger [feed], Shoptalk [feed], Current Issue [feed], Feed [feed], Online Journal [feed], Journal [feed]|
Tuesday, September 06, 2005
But it didn't.
This entry is entitled "It didn't work too quickly (but it did lead to success number 1, 2, 3, 4, 5 ..... :))"
The very first iteration of the application included some Technorati Stats for each tag, so you knew whether it was worth creating or not and also related searches for each of the tags provided by the TermExtraction API.
Each of the above two processes was sequential (thus only JAX and not AJAX). For each tag it would perform two queries, meaning that the overall speed of my application was limited to:
(time of technorati request + time of yahoo related searches) * number of keywords selected by the TermExtraction API.
Yahoo was pretty quick; Technorati's performance was diabolical.
I eventually pulled the Technorati API, and stopped using the Yahoo related searches. The reason I pulled them rather than reworking the app is that at the time I didn't want portions of the page to be trickle-filled; I wanted the page to be fully completed before the user viewed the results. Waiting for a fully rendered page negates the benefits of AJAX principles; I might as well have had the server generate the complete page.
The next version of the app will be fully async, because there will be a built-in request manager. The results will trickle in and be parsed and displayed as they arrive. When one result comes in, it may trigger other queries that will be completely managed in the background.
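To sketch what I mean by a request manager (this is just an outline of the idea, not the real code; the "transport" function stands in for whatever XMLHttpRequest wrapper the app ends up using): every query is queued independently, and its callback fires as soon as that one result arrives, where it can parse, display, and even queue follow-up queries.

```javascript
// A minimal async request manager sketch. "transport" is any
// function (url, callback) that delivers a response; in the real
// app it would wrap XMLHttpRequest.
function RequestManager(transport) {
  this.transport = transport;
  this.pending = 0; // number of in-flight requests
}

// Queue a request. onResult fires as soon as this result arrives,
// independently of any other outstanding requests, and may itself
// queue further requests on the same manager.
RequestManager.prototype.queue = function (url, onResult) {
  var self = this;
  this.pending++;
  this.transport(url, function (response) {
    self.pending--;
    onResult(response);
  });
};
```

With something like this in place, the Technorati and Yahoo queries for every tag can all be outstanding at once, so the total time is closer to the slowest single request than to the sum of all of them.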
To see what I mean, play around with my AJAX Technorati Tagger: Simply enter some text, maybe a news article or one of your journals and see what happens.
|Related Searches [feed], Ajax [feed], Api [feed], Technorati [feed], Bandwidth Problems [feed], Queries [feed], Tag [feed], Async [feed], Diabolical [feed], Jax [feed]|
Monday, September 05, 2005
If you take a look at my application you will see that it looks diabolical. I realised about five minutes into my experimentation that I really need to improve my design skills. I thought initially that it might have been the tools I was using, but as the old saying goes, "A bad worker blames his tools".
I then realised that whilst I am quite good at programming, I sometimes lack a designer's eye; that is to say, I have a vision but not the skills to implement the final vision.
I can conceptualize the code and the implementation easily; it is just the UI that lets everything down.
The reason that this was the case is that the implementation of the software was highly dependent on the UI. The UI drove the application code. The next version of the software should be highly UI-agnostic: the code that calls the web services should not rely on the HTML elements, but rather on the data structures present in the application. The UI should interrogate these data structures to determine what information to display.
In essence, the next version of the application should be better tiered. The UI and the logic (business logic if you like) should be loosely coupled, so that I can change the logic without affecting the page and, likewise, change the UI without having to alter the AJAX logic.
This would allow me to concentrate my efforts on individual parts of the application at any one time, so I can develop an AJAX framework, the business logic and the UI all independently of each other.
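A rough sketch of that separation, with an invented TagStore standing in for the application's data structures: the logic layer fills the store from the web-service responses, and any UI can interrogate it without the logic knowing a single HTML element exists.

```javascript
// Logic layer: a data structure that knows nothing about HTML.
function TagStore() {
  this.tags = [];
  this.seen = {}; // used to ignore duplicate tags
}
TagStore.prototype.add = function (tag) {
  if (!this.seen[tag]) {
    this.seen[tag] = true;
    this.tags.push(tag);
  }
};
TagStore.prototype.getTags = function () {
  return this.tags.slice(); // a copy, so the UI cannot mutate the store
};

// UI layer: interrogates the store and produces markup. Swapping
// this one function changes the presentation without touching the logic.
function renderTags(store) {
  var out = [];
  var tags = store.getTags();
  for (var i = 0; i < tags.length; i++) {
    out.push('<a rel="tag">' + tags[i] + "</a>");
  }
  return out.join(", ");
}
```

The point is that the web-service code only ever calls store.add(), and the page only ever calls renderTags(), so either side can change on its own.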
|Business Logic [feed], Ajax [feed], Agnostic [feed], Data Structures [feed], Vision [feed], Tools [feed], Application Code [feed]|
Does anyone know of any good, free site-searching software? If you do, drop me an email (email@example.com) or leave a comment.
|Search Engine [feed], Site Search [feed], Search [feed], Searching Software [feed]|
Sunday, September 04, 2005
It turned out differently, however. The actual implementation suffered from a lack of foresight. As soon as I created what I thought was my ideal solution [see above], I ran across problems running the AJAX code in Firefox. I also then realised that in a high-security environment in IE 7 and Internet Explorer 6, data binding across data sources is disabled.
To get around this I had to create proxy scripts [here and here] on my server that the client page would call (because they are on my domain, IE and Firefox allow this). All that the proxy scripts do is pass a request that the client makes on to the correct web service. [see image below]
There is an added benefit to using a proxy script; you can hide any secret information that should not be available for the client to see, things such as the developer token that Technorati requires.
The major downside that I see, and the point of this entry, is that I have to handle every client request to the desired web service rather than having the client manage the request itself, thus increasing my bandwidth demands.
Pros:
- Hides security information needed by some web services.
- Allows the developer to monitor the requests that a client would make.
- Proxy scripts allow you to merge requests and perform any kind of data manipulation on the response before it reaches the client.
Cons:
- All data is passed through the server, thus using extra bandwidth.
- Requires proxy scripts to be created.
- Proxy scripts may be insecure and may also take up too much server bandwidth.
I fully intend to support, in the next version, systems that allow cross-domain data sources, because it will greatly help my bandwidth demand situation. But there may be some situations where I need to perform multiple calls to web services in one single call to my proxy script.
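To sketch the client side of this arrangement: every web-service call goes through one same-origin URL, with the target service named as a parameter. The proxy script name and parameter scheme below are my own invention for illustration, not the actual scripts on my server.

```javascript
// Build a same-origin URL asking the proxy script to forward a
// request to the named web service. The "/proxy.php" path and the
// "service" parameter are hypothetical examples.
function buildProxyUrl(service, params) {
  var parts = [];
  for (var name in params) {
    parts.push(encodeURIComponent(name) + "=" + encodeURIComponent(params[name]));
  }
  return "/proxy.php?service=" + encodeURIComponent(service) +
         (parts.length ? "&" + parts.join("&") : "");
}
```

Because every request funnels through one URL builder like this, the proxy is also the natural place to merge several web-service calls into a single round trip, which is exactly the situation described above.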