Monday, September 28, 2009

Homework 6

1.
a.
  • When will cars be fully automated? http://www.dailymail.co.uk/news/article-393401/The-self-driving-Golf-Herbie-run-money.html
  • Green energy technologies. http://www.greenchipstocks.com/articles/jatropha-biofuel/450
  • Future of markup. http://xml.coverpages.org/coombs.html
b.
  • The car offers great precision and crash avoidance, including "sat-nav, collision avoidance sensors and anti-lock brakes." It can even reach 150 mph, though that capability seems beside the point.
  • The jatropha plant can grow in arid conditions and spreads rapidly because it is a weed. It is also inedible, so it does not compete with food crops. Perhaps it's a better alternative to petroleum than corn ethanol.
  • Describes markup use for scholarly purposes, arguing against procedural markup (the old way) and in favor of descriptive markup (i.e., XML-like).

2.
a. Markup language could branch out into being a descriptive language not only for readable text data but also for images, audio files, etc. E.g. have all the faces in various pictures selected and given the tag "face." Then one could go and search "face" on the picture and have the portions of the picture which contain faces be highlighted.

b.
Local quality
Change an object's structure from uniform to non-uniform, change an external environment (or external influence) from uniform to non-uniform.
(replace procedural markup with descriptive tags, allowing the data to be interpreted from the author's perspective.)

The other way around. (instead of searching for a particular item in a data set, make inferences by looking at what items you are presented with and use those.)

Partial or excessive actions. (be verbose; leave as little ambiguity as possible for the end-user.)

Feedback. (use user queries on similar data sets to determine whether different data can be reconciled, reducing confusion when searching for different but equivalent data.)

Cheap short-living objects. (for data that is produced and sent quickly and will be integrated into a larger series, mark it up quickly, since the tags will be replaced once it is in the larger set.)

Discarding and recovery. (retain mark-up data even after it has been processed, for reference or backup.)

Merging. (combine marked-up data with a visual representation through logical relevancy, for a quick overview or understanding of the data.)
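Several of the rows above turn on the procedural-versus-descriptive distinction, which can be made concrete. In this illustrative sketch (the tag names are invented), procedural markup encodes how text should render, while descriptive markup names what the text is, so a program can query its meaning:

```python
import xml.etree.ElementTree as ET

# Procedural style: a formatting command says *how* to render the text.
# A program can only obey it, not reason about it.
procedural = r"\bold{Call me Ishmael.}"

# Descriptive style: the tag names the *role* of the text; rendering is
# decided later, and the meaning is machine-queryable.
descriptive = ET.fromstring('<quote speaker="Ishmael">Call me Ishmael.</quote>')

print(descriptive.get("speaker"))  # the program can now ask "who said this?"
```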





Saturday, September 19, 2009

Homework 5

1. I am using Intrade with the username Phantasyfin.


2.


  1. A federal government run health insurance plan to be approved before midnight ET 31 Dec 2009

  2. Average Global Temperature for 2009 to be among five warmest years on record

  3. US Economy in Recession (*see contract rules for definition*)

  4. The US Economy will go into Recession during 2009

  5. Microsoft Windows 7 to be released on/before 31 Dec 2009

  6. United States to conduct overt military action against North Korea on/before 31 Mar 2010

  7. USA agrees before end of 2009 to reduce CO2 emissions by 10% or more by year 2025

  8. Jennifer's Body to gross OVER $5.0M in opening weekend

  9. A cap and trade system for emissions trading to be established before midnight ET on 31 Dec 2011

  10. Osama Bin Laden to be captured/neutralised by 31 Mar 2010

  11. Venue in North America to host the 2016 Summer Olympics



3.


  1. Too high

  2. Too low

  3. Too low

  4. Too low

  5. Too low

  6. Too high

  7. Too low

  8. Too low

  9. Too high

  10. Too low



4.

      #shares   price per share ($)   total per market ($)
a.       0            0                       0
b.      20            5.45                  109.00
c.      30            9.68                  290.40
d.      15            9.80                  147.00
e.      65            0.90                   58.50
f.       0            0                       0
g.      70            1.20                   84.00
h.      10            2.50                   25.00
i.       0            0                       0
j.      52            5.49                  285.48




Total: $999.38




The letters a through j correspond to the listed market predictions in number 2 of the homework.

Monday, September 14, 2009

Homework 4

1. My original question was "When will a standardized markup language be implemented for data?". This was very ambiguous and was interpreted very differently from what I had envisioned. One way it could be read: "So all data is being marked up, and a standard set of rules is in place for this data. It seems highly rigid and inflexible, something difficult to implement and too cumbersome to be efficient or useful in forwarding human knowledge."

I would reword my question to specify that not all data, not every single string being produced, would be marked up. There would be a set of rules to follow, but these would allow the marker to work within them and create his own tags and nesting if he so chooses. Basically, the standard would encompass pertinent information, that which would be more useful to end-users if it could be easily accessed and searched through by means of an intuitive index. It seems very interesting to me that a document spanning hundreds of pages, or perhaps a hundred documents 1-2 pages long, could be programmatically filtered to output desired information, rather than a user having to wade through familiar, irrelevant, or "fluff" material.

I can't reword my question, though, without assuming the receiver has some common sense. One commenter wrote, "but stuff written on napkins won't be marked up." Of course things written on napkins won't be marked up; only relevant information, that which is important enough to warrant examination or review, should be marked up, or else the process would be counterproductive. By counterproductive, I mean that marking up trifles would take more time than it would save in the whole scheme, in my opinion at least. Only when computer processing and data accumulation reach much greater power, and the need for much greater data accumulation and processing arises, would marking up all information be practical.
I hold that explicit is always better than implicit. If everything can be analyzed logically, then informational ties and unity can be drawn more easily and efficiently. If there is room for error, some backtracking may occur, or no amount of backtracking may solve the problem, forcing human intervention to resolve ambiguity: ambiguity that only results from implicit human thought. However, I think that by the time computers can mark up and systematically analyze trifles, they should also be able to think rationally, like a human. If a computer has perfect unity and flow of information, it would only accumulate more, continuing to grow and store empirical data.

2. For my project I would obtain a series of unmarked information, mark it up in XML, and then parse out whatever information I desired. I could compare the time it would take to search the documents by hand against the time it takes to parse them. I would not include the time needed to program and mark up the information in my stand-alone case; however, I will try to determine whether the approach increases the efficiency of getting the information one needs from the documents. My theory is that it definitely would not be more efficient to program this if the series spanned only a few pages and I were the only one viewing the documents. However, I think that the more people who need information from the documents, coupled with an increasing information load, the greater the gain in efficiency, reducing the time spent by each participant. The next step would be to find documents that suit my parsing plan.
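A minimal sketch of the proposed comparison, assuming the documents have already been marked up with a hypothetical `<price>` tag. It times a plain-text scan of the raw document against a query over the parsed markup; neither the document nor the tag names come from a real data set:

```python
import re
import time
import xml.etree.ElementTree as ET

# A synthetic "marked-up document" with 1000 repeated records.
document = ("<report>"
            + "<item><name>widget</name><price>9.99</price></item>" * 1000
            + "</report>")

# Approach 1: treat the document as plain text and scan it with a pattern.
start = time.perf_counter()
prices_scan = re.findall(r"<price>([0-9.]+)</price>", document)
t_scan = time.perf_counter() - start

# Approach 2: parse the markup once, then query by tag name.
start = time.perf_counter()
root = ET.fromstring(document)
prices_query = [p.text for p in root.iter("price")]
t_query = time.perf_counter() - start

print(f"scan: {t_scan:.6f}s  parse+query: {t_query:.6f}s")
```

Both approaches should return the same values; the interesting measurement is how the times diverge as the document grows and as queries are repeated against an already-parsed tree.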

Tuesday, September 08, 2009

Homework 3



1. Here are two graphs of the question results; -1 represents "Never".


2. One key difference in the Delphi method is that the responses and comments are completely anonymous. In class, we could see quite clearly who had given each response and how it was communicated. Also, members of the group can revise their previous statements at any time.


3. One key weakness I find is that the topics chosen to forecast are very diverse across the class; it is unlikely everyone will be knowledgeable about what the others are interested in. There is also a bandwagon effect from the lack of anonymity: a person viewed as more respectable may sway votes in his favor, while someone in disfavor could push others away from his position. Anonymity could be achieved with something like a chat room in which each entering member is given a randomly assigned username. Ignorance of topics could also be largely eliminated if the class reached a consensus on generally-known topics.





Tuesday, September 01, 2009

Homework 2

1. I will make a prediction about the future of the HIV/AIDS virus. Currently HIV is growing at an exponential rate, increasing as the world's population increases. The UN also predicts it will continue to grow through 2025, as pictured:

[Figure: UN projection of the global HIV/AIDS population through 2025]
However, it seems that the rate of growth is diminishing and will eventually level out. Also, the number of people without HIV is rising, so the population with HIV is shrinking relative to the population without it. This leads to the conclusion that the number of people with HIV would eventually even out, creating an S-curve. Once the rate flattens, it may then decline, producing a plateau curve. I predict a plateau curve because of medical advances and HIV awareness and prevention: since HIV is prevalent around the world, a global effort has ensued to eradicate it. Perhaps man will be able to stop it, but its future is uncertain.
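The S-curve described above is the classic logistic shape: near-exponential growth at first, then a leveling-off toward a ceiling. A minimal sketch follows; the parameters (ceiling, growth rate, midpoint year) are purely illustrative and not fitted to real HIV data:

```python
import math

def logistic(t, ceiling=40.0, rate=0.3, midpoint=2010):
    """Illustrative S-curve: millions infected at year t, leveling at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Early years grow fast; later years flatten out near the ceiling.
for year in (1990, 2000, 2010, 2025, 2050):
    print(year, round(logistic(year), 2))
```

A plateau curve would add a decline term after the flat top, e.g. multiplying by a slow decay once prevention and treatment start shrinking the infected population.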


2.
a. It would take about 12 years for productivity to double if the number started at 1440: by the twelfth year productivity would be about 2897.563 (i.e., 1440 × 1.06^12, assuming 6% growth per year).
b. The increase per year is 41.421356% if it doubles every 2 years, since 2^(1/2) − 1 ≈ 0.41421356.
c. The increase per year is 58.7401052% if it doubles every 18 months, since 2^(1/1.5) − 1 ≈ 0.587401052.
d. Assuming I started out with $1200 in my account, by the end of the 35th year, after interest had been accounted for, my money would have doubled to about $2400 (consistent with 2% annual interest, since 1.02^35 ≈ 2).
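The arithmetic above can be checked directly. Doubling every T years means an annual growth factor of 2^(1/T); the 6% rate in (a) and the 2% rate in (d) are inferred from the stated answers rather than given in the problem:

```python
# (a) 6% annual growth, starting from 1440, for 12 years
growth_a = 1440 * 1.06 ** 12

# (b) doubling every 2 years -> annual percentage increase
rate_b = (2 ** (1 / 2) - 1) * 100

# (c) doubling every 18 months (1.5 years) -> annual percentage increase
rate_c = (2 ** (1 / 1.5) - 1) * 100

# (d) $1200 at 2% interest for 35 years; 1.02 ** 35 is almost exactly 2
balance_d = 1200 * 1.02 ** 35

print(round(growth_a, 3))   # 2897.563
print(round(rate_b, 6))     # 41.421356
print(round(rate_c, 6))     # 58.740105
print(round(balance_d, 2))  # just under 2400
```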