Wednesday, December 12, 2007

Secondary protein structure prediction

What does secondary structure mean?

In biochemistry and structural biology, secondary structure is the general three-dimensional form of local segments of biopolymers such as proteins and nucleic acids (DNA/RNA).
It does not, however, describe specific atomic positions in three-dimensional space, which are considered to be tertiary structure.



Protein Structure Prediction

-One of the most important goals pursued by bioinformatics and theoretical chemistry.

-Aim is to predict the three-dimensional structure of proteins from their amino acid sequences, sometimes including additional relevant information such as the structures of related proteins.

-It deals with the prediction of a protein’s tertiary structure from its primary structure.

-High importance in medicine (for example, in drug design) and biotechnology (for example, in the design of novel enzymes).

Some examples of prediction approaches are:

-Ab initio protein modelling
(Ab initio protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures.)

-Comparative protein modelling

o Homology modelling (based on the reasonable assumption that two homologous proteins will share very similar structures.)

o Protein threading (scans the amino acid sequence of an unknown structure against a database of solved structures)

-Side chain geometry prediction.
(Even structure prediction methods that are reasonably accurate for the peptide backbone often get the orientation and packing of the amino acid side chains wrong.

Methods that specifically address the problem of predicting side chain geometry include dead-end elimination and the self-consistent mean field method. Both discretize the continuously varying dihedral angles that determine a side chain's orientation relative to the backbone into a set of rotamers with fixed dihedral angles. The methods then attempt to identify the set of rotamers that minimize the model's overall energy. Rotamers are the side chain conformations with low energy. Such methods are most useful for analyzing the protein's hydrophobic core, where side chains are more closely packed; they have more difficulty addressing the looser constraints and higher flexibility of surface residues.)
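To make the elimination test concrete, here is a toy Python sketch of the classic dead-end elimination criterion: rotamer r at a position can be discarded if its best possible energy is still worse than the worst possible energy of some competing rotamer t. All rotamer names and energy values below are invented; real packers use rotamer libraries and physics-based energy functions.

# Self-energies E(i_r) and pairwise energies E(i_r, j_s) (invented values).
self_E = {("i", "r1"): 2.0, ("i", "r2"): 0.5}
pair_E = {
    (("i", "r1"), ("j", "s1")): 1.0, (("i", "r1"), ("j", "s2")): 1.5,
    (("i", "r2"), ("j", "s1")): 0.2, (("i", "r2"), ("j", "s2")): 0.4,
}

def dee_eliminates(pos, r, t, other_positions, rotamers_at):
    """True if rotamer r at pos can never beat rotamer t: even r's most
    favourable interactions lose to t's least favourable ones."""
    best_r = self_E[(pos, r)]
    worst_t = self_E[(pos, t)]
    for j in other_positions:
        best_r += min(pair_E[((pos, r), (j, s))] for s in rotamers_at[j])
        worst_t += max(pair_E[((pos, t), (j, s))] for s in rotamers_at[j])
    return best_r > worst_t

print(dee_eliminates("i", "r1", "r2", ["j"], {"j": ["s1", "s2"]}))  # True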

MudPIT

Introduction
Before Multidimensional Protein Identification Technology (MudPIT) came about, liquid chromatography (LC) and mass spectrometry (MS) were used separately to fractionate and then identify the protein composition of a biological sample.

Disadvantage of using Liquid Chromatography
- Loss of material that commonly occurs in chromatographic processes

Disadvantage of using Mass Spectrometry (gel-based)
Although gel-based methods are widely used for protein identification, they have several drawbacks:
- Problem identifying hydrophobic proteins
- Difficulty detecting low-level proteins (dye staining is not sensitive)
- Long experiment duration
- Inability to be automated
- Biological samples need to undergo solubilization

These problems decrease the sensitivity of the protein identification process, and MudPIT seeks to address them by improving the separation and identification of proteins.


So what is MudPIT?
Multidimensional Protein Identification Technology, or MudPIT, is a largely unbiased method for rapid and large-scale proteome analysis by multidimensional liquid chromatography, tandem mass spectrometry, and database searching with the SEQUEST algorithm.

Advantage of MudPIT
- Eliminates the problems of the gel-based approach to MS
- More sensitive, and thus able to detect low-abundance proteins
- The two-dimensional chromatography technique reduces sample loss


Workflow of MudPIT
1. Prepare the protein sample
2. Digest the protein sample into peptides
3. Separate the peptides in two liquid chromatography steps:
- strong cation exchange
- reversed-phase high-performance liquid chromatography (HPLC)
4. Acquire tandem mass spectra of the peptides
5. Search the mass spectra against a protein sequence database
6. Identify the proteins in the sample using SEQUEST
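As a rough illustration of the database-search steps (5-6), here is a toy Python sketch that scores an observed tandem spectrum against theoretical fragment masses for candidate peptides. Note that SEQUEST's real score is a cross-correlation; this sketch uses a simple binned dot product, and every mass, intensity and peptide below is invented.

# Toy database-search step: score a measured tandem mass spectrum against
# theoretical fragment masses for each candidate peptide.

def binned(spectrum, bin_width=1.0, size=2000):
    """Turn (m/z, intensity) peaks into a fixed-length vector."""
    vec = [0.0] * size
    for mz, intensity in spectrum:
        idx = int(mz / bin_width)
        if 0 <= idx < size:
            vec[idx] = max(vec[idx], intensity)
    return vec

def score(observed, theoretical_mzs):
    """Dot product of the observed spectrum with a unit-intensity
    theoretical spectrum -- higher means more fragment masses explained."""
    obs = binned(observed)
    theo = binned([(mz, 1.0) for mz in theoretical_mzs])
    return sum(o * t for o, t in zip(obs, theo))

observed = [(175.1, 40.0), (263.1, 80.0), (376.2, 100.0), (503.3, 55.0)]
candidates = {  # hypothetical peptides with invented fragment m/z lists
    "PEPTIDEA": [175.1, 263.1, 376.2, 503.3],
    "PEPTIDEB": [147.1, 234.1, 390.2, 488.3],
}
best = max(candidates, key=lambda p: score(observed, candidates[p]))
print(best)  # PEPTIDEA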



Some projects using MudPIT:
1) Protein pathway and complex clustering of correlated mRNA and protein expression analyses in Saccharomyces cerevisiae

2) Food Standards Agency
This project is researching the feasibility of using MudPIT as an alternative to the gel-based approach for the rigorous safety assessment of GM plants.

3) Chloroplast proteomics: potentials and challenges
This is botany research on the analysis of the chloroplast proteome.


source:

Technologies and Strategies for Research and Development

DRUG Discovery & Development

http://www.hupo.org/educational/past_congresses/2007_seoul/3_MacCoss_color.pdf

Nature Publishing Group

posted by Alvin

Tertiary Structure: New Breakthrough...

For years, bioinformaticians have been trying to solve the great mystery of protein structure by prediction.
In the past, experts used to say that determining the shape of a protein was just a matter of firing an X-ray beam @_@ at its crystalline form and measuring it... and chemists were way too sceptical to be proven wrong!
Until recently... recently...
Ah AH!! On 14/10/2007, David Baker, a biochemist at the University of Washington, and his colleagues found astonishing results that proved what the experts said WRONG, breaking the scepticism in the chemists' world!
Baker came up with a new technique which combines what is already known about protein structure with the vast computing power now available (what this means: getting some 150,000 volunteers to run his program at home).
Soo.. how does this program work?
In basic terms:
1) Break the protein sequence into small stretches
2) Match the stretches against all the known protein structures (logically, the closer the match, the more accurate the model)
3) Minimize the free energy of the structure to measure its stability
4) Repeat these steps over and over until the model lurches towards an ever more accurate structure! ~Woo (there's a huge amount of computing to do, so getting everything done takes quite some time, but the results can be darn good)
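To make that loop concrete, here is a toy Python sketch of the generate-score-keep idea: repeatedly splice a random "fragment" into the model and keep the change whenever it lowers the energy. The energy function and the hidden target are invented stand-ins; Rosetta's real energy function, fragment libraries and Monte Carlo schedule are far more sophisticated.

import math, random

random.seed(0)
TARGET = [random.uniform(-math.pi, math.pi) for _ in range(12)]  # pretend "native" torsion angles

def energy(torsions):
    # Invented score: how far the model is from the hidden native state.
    return sum((a - b) ** 2 for a, b in zip(torsions, TARGET))

def fragment_assembly(n_steps=20000, frag_len=3):
    model = [0.0] * len(TARGET)
    for _ in range(n_steps):
        i = random.randrange(len(model) - frag_len + 1)
        fragment = [random.uniform(-math.pi, math.pi) for _ in range(frag_len)]
        trial = model[:i] + fragment + model[i + frag_len:]
        if energy(trial) < energy(model):  # greedy accept; real runs use Metropolis moves
            model = trial
    return model

print(round(energy(fragment_assembly()), 3))  # falls toward 0 as the model improves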
And this program runs on a platform called... BOINC (the project itself is Rosetta@home).

So in full: take a sequence, say a 112-amino-acid protein. The network breaks it up into several million candidate structures, which (after a very long time) are whittled down to five {from six zeros of digits to 5... @_@!}, and then to one. That structure is then compared with the structure determined from the protein's crystal.
But even after all this, the result might not be as precise as the crystal structure; still, it's good enough that in some cases you could "throw away" the X-ray technique.
Even so, there's still room for improvement. For now, the most exciting prospect of structure prediction is custom-made proteins, where you hunt for sequences that correspond to a desired structure.
The team is currently redesigning the gp120 protein of HIV, hoping to make a vaccine that could stimulate the immune system in a different way from the natural virus. If this is possible, the reshaped protein should elicit antibodies that attack the virus more effectively. {If it succeeds, I believe HIV might no longer be regarded with such dread... and innocent children might be saved...}

Baker's words: "The days when protein modellers thought they could make crystallization obsolete are long gone."

"If you really care about the structure of your protein, you should get some experimental data and combine it with modelling."
So, the way things are progressing, we will soon see the light!

For more info : Rosetta@Home
For more info on people helping out with BOINC: http://www.youtube.com/watch?v=GzATbET3g54

Bye Bye!
Post by Zhong En

Sunday, December 9, 2007

European Bioinformatics Institute (EBI)

About the Institute

An organisation which forms part of the European Molecular Biology Laboratory(EMBL).

- Provides research on bioinformatics

- Manage database of biological data

http://www.ebi.ac.uk/

About the Research groups

There are more than 25 groups at EBI, of which only about five are research groups.

http://www.ebi.ac.uk/Groups/

The research groups include the Bertone group, the Luscombe group, the Huber group, the Thornton group and more.

Firstly, I'll talk about the Huber group, whose research focuses on gene transcription and on protein-DNA binding, studied with DNA microarrays.

The group's work aids the understanding of functional genomics data.

http://www.ebi.ac.uk/huber/

The next group I'll talk about is the Luscombe group, which focuses on genomic analysis of regulatory systems.

http://www.ebi.ac.uk/luscombe/

This group studies what the biology of an organism looks like by investigating the cells of different species, the expression of their genes, the production of proteins, etc.

http://www.ebi.ac.uk/luscombe/research.html

Current status of the research:

  • Providing graphical models for understanding the relationships within regulatory networks.

  • Identifying vulnerabilities in regulatory networks that are prone to disease.

  • Studying how the transcriptional regulatory network interacts with other cellular components.

  • Analysing transcription factors in the human genome.

  • Studying complex bacterial behaviour.

Future enhancements:

  • Advancing analysis techniques to better understand regulatory networks.

  • Consolidating their research in bacteria and other organisms.

  • Interacting with research groups performing genome-scale experiments.

Lastly, the Thornton group researches how biology works at the molecular level, which is a very broad research area.

http://www.ebi.ac.uk/Thornton/research.html

One of the group's many research topics is enzyme activity: the study of how enzymes work, what their functions are, and how they evolved.

Current status of this research: http://www.ebi.ac.uk/Thornton/group_publications.html




Thursday, November 22, 2007

Ludwig Institute for Cancer Research

Ludwig Institute for Cancer Research

So, what is Ludwig Institute for Cancer Research?
Who is the founder of this institute?
why does the founder want to set up this institute?

Well, the Ludwig Institute for Cancer Research (LICR) is a non-profit organization which was set up in 1971 by the American business magnate Daniel K. Ludwig.

Its aim is to control and destroy cancer by mastering it, so as to relieve the human suffering caused by cancer.

Basically, LICR is the largest international academic institute with nine Branches in seven countries across Australasia, Europe, and North and South America, and numerous Affiliates in many other countries.

WOW.. LICR is going to set up its first Asian branch in Singapore

Just got to know that a new research centre will soon be opened in Singapore by the Ludwig Institute for Cancer Research (LICR). It is going to be the first Asian branch, and it will collaborate with three Singapore institutions: A*STAR, YLL-NUS and Duke-NUS GMS.
This is going to be world-class laboratory research. LICR and Singapore are going to narrow the gap between the laboratory and the clinic, and they hope to bring research discoveries to human benefit.

You can read more about the news here: http://www.licr.org/C_news/archive.php/2007/11/04/ludwig-institute-for-cancer-research-to-set-up-its-first-asian-branch-in-singapore/

Immune system can drive cancers into dormant state

We know that scientists have been working for years to make use of the immune system to get rid of cancers; this technique is known as immunotherapy. But a new finding shows that even when the immune attack cannot kill the cancer outright, the immune system can still hold the tumour in check. This explains why some tumors seem to stop growing and go into a lasting period of dormancy.

You can read about the study in the following link: http://www.licr.org/C_news/archive.php/2007/11/19/immune-system-can-drive-cancers-into-dormant-state/

REF LINK: http://www.licr.org/

Genomics

Green Alga Genome Project
This is one of the genome projects that most people might find interesting. One significant alga is Chlamydomonas reinhardtii, known to the research community as "Chlamy". It is uniquely associated with carbon dioxide capture and the generation of biomass. As fossil fuels are limited in supply, biomass is getting more and more popular, since it also produces lower greenhouse gas emissions when appropriate agricultural techniques and processing strategies are used. Many of Chlamy's functions have human counterparts, which helps in the basic understanding of certain human diseases.

Source: http://www.jgi.doe.gov/News/news_10_11_07.html

Sea Urchin Genome Project
The purple sea urchin (Strongylocentrotus purpuratus) is known to be closely related to humans, as both species are deuterostomes. For example, the membrane-bound receptor guanylate cyclase implicated in an important human disease, heat-stable enterotoxin-induced dysentery, was first isolated from sea urchin sperm.

Source: http://www.hgsc.bcm.tmc.edu/projects/seaurchin/

Wednesday, November 21, 2007


Systems Biology

An Introduction

Systems biology is the study of interactions within biological systems. An example of a biological system is a plant cell, where the various organelles work together to maintain a healthy, functional cell. Instead of analyzing individual components of the organism, such as chloroplasts or the cell nucleus, systems biologists focus on all the components and the interactions among them, all as part of one system.

Just like a computer, biological systems comprise many individual components that serve specific functions. The central processing unit (CPU), for example, performs extremely fast calculations, but with it alone a computer cannot carry out the tasks that are required.


A computer system comprises parts which interact. The interaction of these parts produces new properties and functions. Because these properties are the result of interactions between the parts, they cannot be attributed to any single part of the system. This makes systems irreducible. A system is unlikely to be fully understood by taking it apart and studying each part on its own. (We cannot understand an author's message by studying individual words; we cannot appreciate a forest by looking at individual trees.) To understand systems, and to fully grasp a system's emergent properties, systems need to be studied as a whole.

Scientists today seek to understand the interactions occurring within biological systems. Unlike the computer, biological systems were not created by man, so little knowledge is available about the functions and the behavior of the components within a cell. Just as a hardware developer has to understand the details of a computer system, scientists in the field of systems biology have to do the same. With this knowledge they are able to make changes to the organism to 'improve' it.

The goals of systems biology do not stop with a complete understanding of biological systems. Systems biology gives rise to prescriptive medicine – medicine that is specific to particular illnesses and could possibly hold the cure for currently incurable ailments like cancer.

I foresee many problems with the use of special medicines. Because we are not God and our understanding of whole systems is limited, the wrong use of science could potentially leave us with a world full of zombies, as depicted in the movie 'I Am Legend'.

A Systems Biology Project

Halophilic Archaea Research – Institute for Systems Biology (ISB)


Scientists Woese and Fox demonstrated the presence of a third domain of life, called the Archaea, besides the eukaryotes and the bacteria. The halophilic archaea are able to survive in hypersaline (extra salty) environments (e.g. the Dead Sea) as they are physiologically robust and are able to tune themselves appropriately to the environment through signal transduction and gene regulatory networks.

Researchers at the ISB have focused their study on two organisms - Halobacterium NRC-1 and Haloarcula marismortui. The study of these organisms offers an opportunity to understand the system-level mechanisms of environmental response systems in cells. To carry out the study, scientists have determined the complete genome sequences for both of the above organisms and developed an array of genome-scale strategies tailored to analyzing their biology. Using these powerful tools, they are applying systems approaches to obtain from halophiles the complete sets of metabolic and gene regulatory networks that together specify their behavior in the face of changing environmental conditions.

This study undertaken by the ISB could uncover new insights into the systems cells adopt to adapt themselves to different environments.


Sources

http://www.systemsbiology.org/Intro_to_ISB_and_Systems_Biology/Systems_Biology_--_the_21st_Century_Science

http://baliga.systemsbiology.net//

Monday, November 19, 2007

European Bioinformatics Institute (EBI)

The European Bioinformatics Institute (EBI) is an organisation that forms part of the European Molecular Biology Laboratory (EMBL). The EBI is a centre for research and services in bioinformatics. The Institute manages databases of biological data including nucleic acid, protein sequences and macromolecular structures.
EBI is a pioneer of novel and developmental bioinformatics research. They have specialist research groups providing an invaluable resource of biological data and utilities to aid the scientific community in the understanding of genomic and proteomic data.

Research at EBI:
Rolf Apweiler - Joint Team Leader Panda (Protein and nucleotide database) Group
Panda proteins - This part of the Panda group is in charge of data resources related to the protein sequence, domains and families database resources.


Paul Bertone - Group Leader
Bertone Group - The group applies bioinformatics and functional genomics to study early developmental pathways, with a particular focus on lineage commitment and differentiation of mammalian embryonic and neural stem cells.


Ewan Birney - Joint Team Leader Panda (Protein and nucleotide database) Group
Panda nucleotides - This part of the Panda group is in charge of the nucleotide sequence databases at the EBI that include ENSEMBL, EMBL-Bank and ASTD.


Alvis Brazma - Team Leader
Microarray Group - Gene expression data analysis, gene network and function inference from microarray data, functional genomics data integration, analysis and visualisation, biomedical informatics.


Nick Goldman - Group Leader
Goldman Group - Nick Goldman's group studies statistical methods for the analysis of DNA and amino acid sequences, to study evolution and to exploit evolutionary relationships to better understand the function of genome regions.


Wolfgang Huber - Group Leader
Huber Group - The group develops mathematical and statistical methods for the understanding of functional genomics data and the modeling of biological systems.


Sarah Hunter - Team Leader
InterPro Team - This team is responsible for the development and maintenance of the InterPro, Gene Ontology Annotation (GOA) and CluSTr projects. InterPro is an integrated documentation resource for protein families, domains and functional sites, and is used for small and large-scale functional classification of proteins.


Nicolas Le Novère - Group Leader
Computational Neurobiology Group - The interests of the group Computational Neurobiology revolve around signal transduction in neurons, ranging from the molecular structure of membrane proteins involved in neurotransmission to modelling signalling pathways.


Nick Luscombe - Group Leader
Luscombe Group - The group studies biological regulatory systems on a genomic scale: our current focus is to examine how the biology of an organism is shaped by regulation of gene expression. We investigate this at various levels of complexity, from single-celled bacteria and yeast, to mammals by integrating disparate sources of data.


Dietrich Rebholz-Schuhmann - Group Leader
Rebholz Group - Rebholz group studies extraction of facts from scientific literature, develops new language processing and statistical methods in conjunction with bioinformatics data resources.


Janet Thornton - EBI Director
Thornton Group - The group analyses the three dimensional structural basis of protein function and its evolution. We focus on enzyme catalysis, molecular recognition and drug design and the molecular basis of ageing.


What interests me is the Thornton Group: they are able to find out how biology works at the molecular level, and to research the evolution of enzyme function through structural analysis.
I generally feel that the research done by all these groups is extremely useful to bioinformatics, from the consolidated resources of proteins, nucleotides, etc. to the prediction of the structural basis of proteins.

For more information about the various groups above, please visit: http://www.ebi.ac.uk/research/

Proteomics

What is Proteomics?

Proteomics is the large-scale study of proteins, particularly their structures and functions. Proteins are vital parts of living organisms, as they are the main components of the physiological pathways of cells.

Proteomics is often considered the next step in the study of biological systems, after genomics. It is much more complicated than genomics, mostly because while an organism's genome is rather constant, a proteome differs from cell to cell and constantly changes through its biochemical interactions with the genome and the environment.

Recent projects on Proteomics

  1. HUPO Proteome Biology of Stem Cells Initiative

  This initiative, which is charged with characterizing all of the known human embryonic stem cell lines, has helped rally enthusiasm for proteomics among stem cell researchers. Because of its results, stem cell researchers are starting to realize that proteomics approaches could shed light on important processes that are specific to stem cells.

    Source: http://pubs.acs.org/subscribe/journals/jprobs/6/i09/html/0907proteomics.html


  2. Human Liver Proteome Project

  This project set out to reveal the "solar systems" of the human liver proteome: expression profiles, modification profiles, a protein linkage (protein-protein interaction) map and a proteome localization map, and to define an ORFeome, physiome and pathome. It has set up a management infrastructure, identified reference laboratories, confirmed standard operating procedures, initiated international research collaborations, and achieved the first set of expression profile data.

    Source: http://www.mcponline.org/cgi/content/abstract/4/12/1841


  3. Proteomic Analysis of the Human Serum Proteome

  This project set out to establish the potential of the human serum proteome for screening and diagnosis in large populations, using existing cutting-edge technologies for high-throughput protein analysis and biomarker discovery.

    Source: http://gammerman.com/grants.htm

GoogleMap

The following are some interesting bioinformatics-related projects that incorporate some GoogleMap elements.

1. Google Metabolic Maps

This project was brought up by a PhD candidate in Biomedical Informatics named Duncan.

"Wouldn't it be great if Google applied some of that engineering expertise and agility to science and bioinformatics? Just imagine: we could have Google Metabolic Maps, a virtual globe of the cell for scientists everywhere..." quoted from his article.

It is mentioned that many scientists have been drawing metabolic maps for a long time, but this research hasn't been given proper integration; the charting of detailed pathways is still undergoing massive reconstruction. If only there were a virtual representation of the metabolic pathways that looked more like GoogleEarth or GoogleMaps than the old-fashioned style of maps. If we could expect more, the metabolic maps could even be explored on an interactive tabletop computer rather than on conventional scientific machines. PLUS, it would be a bonus if it were made open source, so that just about anyone could use it for any purpose.

P.S. This could be the next IN thing for Google and bioinfo people (DEMO)?

reference: http://www.nodalpoint.org/2007/05/31/google_metabolic_maps

2. A medical representation for anatomy studies a.k.a Google Body

This project is not exactly called Google Body. It is not even a project, actually.
It's more of a development of the existing scanning and graphical representation in the medical world.
source : http://www.nbc11.com/slideshow/news/14100749/detail.html

Let me summarize this. A group of medical experts wants to re-create Google Earth for human anatomy studies. Google would be helping doctors map the human body into a virtual "thingy", giving people a closer look inside. It's really cool.

"Not only can they become more familiar and have an easier time understanding anatomy, they can explain to patients better and allow them to interact with their own images" said Stanford associate professor Paul Brown.

Instead of looking at the outside of the skull, scientists can look inside it without cracking it open.
In future, doctors may be able to use devices based on the technology to simulate a surgery or practise on virtual patients, Favro said.
Doctors may also send images to a patient's iPhone in order to explain an upcoming surgery.

Google Body could eventually become an anatomy tool that provides a clear picture for patients who are interested in their medical procedures; some kind of demo could be done.

The idea is to develop a system similar to Google Earth, except for the body.
If only there were a system that could attach all the info to a model and layer it down, so the depth of knowledge is like GoogleEarth's.
Harnessing the power of imaging may someday help each person's understanding of what's happening under their skin's surface.

Yay. That's it.

- Chua Fu Lin


Graph Theory

Introduction

In mathematics and computer science, graph theory is the study of graphs, mathematical structures used to model pairwise relations between objects from a certain collection. A "graph" in this context refers to a collection of vertices or 'nodes' and a collection of edges that connect pairs of vertices. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another.

Applications in biology

They are used mainly in :
- interactions of molecules in principle
- protein interactions: possible topology of complexes can be predicted
- studies on behaviour, e.g. interactions between members of a species
- taxonomical trees
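As a small illustration of how such interactions map onto graph theory, here is a Python sketch that stores an invented protein-interaction network as an undirected graph and finds the shortest interaction path between two proteins using breadth-first search.

from collections import deque

# An invented protein-interaction network as an undirected graph.
interactions = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "E"), ("E", "D")]
graph = {}
for u, v in interactions:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def shortest_path(start, goal):
    """Breadth-first search: fewest interaction 'hops' from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])

print(shortest_path("B", "D"))  # ['B', 'C', 'D']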

Current projects

East Tennessee State University

Proteins as Graphs

This project aims to translate molecular descriptors into biomolecular descriptors, so that a relationship can be established between biological activities and chemical properties and structure.

for more info : http://www.etsu.edu/iqb/Math%20Bio%20proj.pdf

New York University

RNA Structure and Function

This project aims at theoretical modeling of RNA in vitro selection (an experimental technique for discovering novel RNAs), prediction of RNA tertiary structures, and design of novel RNAs for biological applications.

for more info: http://www.biomath.nyu.edu/index/webpage_rna_2007.html

Opinions

In my opinion, this technique could potentially help my group's project, as it is largely related to predictions and forecasts of events using graphs. Its applications are also useful for doing designs and the like. I would recommend this technique to my fellow classmates who are also working on prediction features (provided they can understand it properly). Although this is a good technique, it also has a few hard problems (like enumeration, graph coloring, etc.). If you still don't understand, you can click here =)

Fred Hutchinson Cancer Research Center

Fred Hutchinson Cancer Research Center

Lee Hartwell is the President and Director of the Fred Hutchinson Cancer Research Center, an institution of world-renowned depth and variety that is home to three Nobel laureates. More than 2,300 scientists and staff conduct research to understand, treat and prevent cancer, HIV/AIDS and other life-threatening diseases.

Linda Buck - Nobel Prize in physiology or medicine (2004)
Her discoveries of odorant receptors and the organization of the olfactory system

Lee Hartwell - Nobel Prize in physiology or medicine (2001)
His discoveries on regulation of the cell cycle

E. Donnall Thomas - Nobel Prize in physiology or medicine (1990)
His pioneering work on bone marrow transplantation


Breast cancer

Breast cancer is the second leading malpractice-related condition with most lawsuits arising out of misdiagnosis and delayed treatment. One problem is that a mammogram may be negative, even for women with a breast lump, but a negative mammogram does not definitively rule out breast cancer.

According to the American Cancer Society (ACS), women who had no children or had their first child after the age of 30 have a slightly higher risk of getting breast cancer. Having multiple pregnancies or being pregnant at an early age seems to reduce breast cancer risk.

Researchers at the University of Washington and the Fred Hutchinson Cancer Research Center have identified fetal cells that take up residence in the mother before the child's birth. Fetal cells in women may confer immune protection and promote cell repair; such cells also may be harbingers of some autoimmune diseases.

To prove the theory, researchers examined the blood of 82 women post-pregnancy, 35 of whom had had breast cancer. They looked for male DNA in the blood, presuming it was present due to a prior pregnancy. The rationale for this is that it is relatively definitive to detect the male Y chromosome amid the mother's native (obviously female) cells within a blood sample.

Sunday, November 18, 2007

Neural Network

What is it?

While von Neumann machines are based on the processing/memory abstraction of human information processing, neural networks are based on the parallel architecture of animal brains. They are also known as artificial neural networks, and are composed of artificial neurons or nodes.

Biological neural networks are made up of real biological neurons that are connected or functionally-related in the peripheral nervous system or the central nervous system. Artificial neural networks are made up of interconnecting artificial neurons (usually simplified neurons) which may share some properties of biological neural networks. Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving traditional artificial intelligence tasks without necessarily attempting to model a real biological system.

Neural networks are a form of multiprocessor computer system, with
  • simple processing elements
  • a high degree of interconnection
  • simple scalar messages
  • adaptive interaction between elements
Neural networks are widely used in:
In process control:
Newcastle University Chemical Engineering Department is working with industrial partners (such as Zeneca and BP) in this area.
In monitoring:
networks have been used to monitor
  • the state of aircraft engines. By monitoring vibration levels and sound, early warning of engine problems can be given.
  • British Rail have also been testing a similar application monitoring diesel engines.

A simple neural network architecture is illustrated below:

Neural Network algorithms

GANN - Genetic Algorithm Neural Network

This project serves to identify regulatory regions found in genes and proteins, in order to search for important DNA sequences that contain structural properties and specific protein-binding sites, so that functions and models can be predicted.
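The sketch below is not GANN itself: it is a minimal single-neuron classifier trained by gradient descent on one-hot-encoded DNA windows, just to show the flavour of neural-network sequence classification. The training sequences and the TATA "motif" are made up, and GANN additionally uses a genetic algorithm to tune the network architecture.

import numpy as np

def one_hot(seq):
    """Encode a DNA string as a flat 0/1 vector, 4 bits per base."""
    return np.array([[b == c for c in "ACGT"] for b in seq], float).ravel()

# Invented training data: "binding sites" start with TATA, negatives don't.
pos = ["TATAGCGC", "TATACCGT", "TATAGGAA"]
neg = ["GCGCGCGC", "ACGTACGT", "CCCCAAAA"]
X = np.array([one_hot(s) for s in pos + neg])
y = np.array([1, 1, 1, 0, 0, 0], float)

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid activation
    grad = p - y                              # d(log-loss)/d(logit)
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

test = one_hot("TATATTTT")
print(1.0 / (1.0 + np.exp(-(test @ w + b))))  # close to 1: looks like a site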

Gene Ontology

Gene Ontology plays a very big part in gene & protein study. Gene Ontology actually has three parts:
  1. Describing the function of the gene
  2. Its role in multi-step biological processes
  3. Its localization in cellular components

It is believed that every gene has a GO number, which describes the function of the gene.

It also describes roles in multi-step biological processes. As I feel this goes beyond the main point, I shall not explain it in detail.

It can also tell us where a gene product is located in the cell. For example, GO number 0015629 is actually 'actin cytoskeleton', which is in the cytoplasm of the cell.
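As a toy illustration, a GO lookup can be as simple as a table mapping each GO ID to a term name and one of the three parts above. Only GO:0015629 comes from this post; the other two entries are standard GO terms added for illustration.

go_terms = {
    "GO:0015629": ("actin cytoskeleton", "cellular component"),
    "GO:0003677": ("DNA binding", "molecular function"),
    "GO:0006915": ("apoptotic process", "biological process"),
}

def describe(go_id):
    name, aspect = go_terms[go_id]
    return f"{go_id}: {name} ({aspect})"

print(describe("GO:0015629"))  # GO:0015629: actin cytoskeleton (cellular component)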

Here is a link to Gene Ontology Consortium which provides access to the ontologies, software tools, annotated gene product lists, and reference documents describing the GO and its uses.

Many thanks to wikipedia and www.geneontology.org/ for the information.

PS: This is just a blog posting; if there is anything wrong in the information in this post, please do not sue me, just think of it as my personal belief. :p

GIS Software and capability

What is Geographical Information System (GIS)?

It's actually a system for capturing, storing, analyzing and displaying all forms of geographically referenced information. In bioinformatics, GIS technology can be used for scientific investigations and environmental impact assessment; for example, it allows planners to calculate response times in the case of natural disasters, or to map the geographical spread of cancer and other diseases in human, animal and plant populations.

Examples of GIS in Bioinformatics

ETI Bioinformatics
Currently they are participating in 2 GIS projects funded by NWO, the Dutch National Science Foundation:

The Impact of Global Change on the Biological Diversity of the North Sea. Do invading species change the composition and function of the North Sea ecosystem?

Climate change and Indonesian coral reef biotas

For these projects, ETI is building databases, programming queries and preparing maps and map layers

SARS in Hong Kong
Researchers used cartographic and geostatistical methods to analyze the patterns of disease spread during the 2003 severe acute respiratory syndrome (SARS) outbreak in Hong Kong, using geographic information system (GIS) technology. They analyzed an integrated database that contained clinical and personal details on all 1,755 patients confirmed to have SARS from 15 February to 22 June 2003.




Cluster analysis. A series of 12 kernel maps based on date of symptom onset and accounting for a 5-day incubation period of SARS. Each kernel map shows the density of SARS patients adjusted for underlying population density (i.e., SARS infection rate per 1,000 population) on a prototypical day over 16 weeks, with darker zones emphasizing disease hot spots.
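The kernel maps described in that caption are essentially kernel density estimates. Here is a minimal Python sketch of the idea: place a Gaussian "bump" at each patient location and sum the bumps over a grid. The coordinates and bandwidth are invented, and a real analysis would also adjust for the underlying population density, as the caption notes.

import numpy as np

cases = np.array([[0.2, 0.3], [0.25, 0.35], [0.8, 0.7]])   # invented (x, y) patient locations
h = 0.1                                                     # kernel bandwidth

xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.dstack([xs, ys]).reshape(-1, 2)

# Sum a Gaussian kernel centred on each case over every grid point.
d2 = ((grid[:, None, :] - cases[None, :, :]) ** 2).sum(axis=2)
density = np.exp(-d2 / (2 * h * h)).sum(axis=1).reshape(50, 50)

iy, ix = np.unravel_index(density.argmax(), density.shape)
print(xs[iy, ix], ys[iy, ix])  # the hot spot sits near the cluster of the first two cases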


Conclusion

GIS can offer quantitative and statistical measures, along with visualization tools, to examine patterns of disease spread with respect to disease clusters, and so help researchers predict the spread of a disease and monitor outbreaks. But there are still limitations to the GIS technique in outbreak investigation. Howe (1963, National Atlas of Disease Mortality in the United Kingdom, London: T. Nelson) argued that the mapping of diseases tended to expose the "where" but not the "why there" of an outbreak.

Swiss Institute of Bioinformatics (SIB)

Swiss Institute of Bioinformatics (SIB)

SIB is an academic not-for-profit foundation established on March 30, 1998 whose mission is to promote research, the development of databanks and computer technologies, teaching and service activities in the field of bioinformatics, in Switzerland with international collaborations.

The Institute has three roles: teaching, service and research.

- It maintains databases such as SWISS-PROT, PROSITE, EPD, TrEST and TrGEN (predicted proteins).

- It creates software such as DeepView, Melanie, ESTScan and pftools.

- It provides services and a helpdesk for request tracking, publications and several web servers.

SIB also provides education through Bachelor's, Master's and PhD courses. They also run courses in other countries.

- SIB also does research on new and improved algorithms, new technology, new tools and new databases. Research is done mostly on sequence and expression analysis, 3D structure and proteomics, but also on systems biology and evolution.

A recent project happening at SIB is Vital-IT.



Vital-IT

Vital-IT is an innovative life science informatics initiative providing computational resources, consultancy and training to connect fundamental and applied research. It represents an opportunity for European researchers to make use, free of charge, of Vital-IT's Integrated Computational Genomics Resources for projects in any of the life science fields.

This project, through funding provided by the EU 6th Framework Programme, started in 2006 and will continue until 2010. It consists of three modules: Training, Remote Access and Visiting Developer.

Training
Under Training, users, in particular those with limited technical experience, can attend courses on the technical aspects of the infrastructure to learn how to take full advantage of it. The training mainly targets new users from European countries. It is open to graduate students, postdoctoral fellows, and more senior researchers. Part of the travel and living expenses can also be covered for people attending the course.


Remote Access
Remote access to the Vital-IT infrastructure and computational genomics environment is provided via a new user-oriented Web interface. Successful applicants will be provided with a user account on Vital-IT and adequate CPU and disk storage quota to carry out the proposed project. This programme is primarily intended for projects that depend on database and software resources that were developed at Vital-IT and cannot easily be ported to another HPC centre. Requests for remote access to Vital-IT will also be evaluated by the review panel mentioned above. Remote users of the Vital-IT platform should prepare and submit their jobs according to detailed guidelines similar to those applied to existing users. These jobs should be able to run without requiring further assistance from Vital-IT personnel, and will partly be selected according to this criterion.


Visiting Developer
Visiting developers may stay for a period of one week to two months at Vital-IT, with a likely average of one month. Their activities may include the development of new software for HPC applications in life science, parallelization and optimization of existing software for the specific hardware architecture of Vital-IT, and large data-analysis projects making use of the rich database collection offered by this facility. They will be provided with office space, free access to all hardware and software resources required by the project, and technical assistance from Vital-IT staff. Visiting developers will be selected primarily on the basis of a project proposal that will be evaluated by a review panel including external experts in high performance computing, bioinformatics and genomics, including representatives from industry.


Vital-IT can be for institutions, companies or even individuals and the databases can either be local or public.

Link: http://www.vital-it.ch/


Another SIB project is "MyHits", which is dedicated to the annotation of protein sequences and to the analysis of their domains and signatures.


"MyHits provides full access to

  • standard bioinformatics programs (e.g. PSI-BLAST, ClustalW, T-Coffee, Jalview)
  • a large number of protein sequence databases including:
    - standard databases, such as SwissProt, TrEMBL, etc.
    - locally developed databases, such as TrEST, TrGEN, trome, splice variants, etc.
  • databases of protein motifs (Prosite, Interpro)
  • a precomputed list of matches (‘hits’) between the sequence and motif databases. "

all these information can be found at: http://www.isb-sib.ch/projects/intro.htm



Saturday, November 17, 2007

Single Nucleotide Polymorphism

What are SNPs?
Single nucleotide polymorphisms or SNPs (pronounced "snips") are DNA sequence variations that occur when a single nucleotide (A,T,C,or G) in the genome sequence is altered. For a variation to be considered a SNP, it must occur in at least 1% of the population. SNPs, which make up about 90% of all human genetic variation, occur every 100 to 300 bases along the 3-billion-base human genome. Two of every three SNPs involve the replacement of cytosine (C) with thymine (T). SNPs can occur in both coding (gene) and noncoding regions of the genome. Many SNPs have no effect on cell function, but scientists believe others could predispose people to disease or influence their response to a drug.
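In code, spotting SNP-style single-base differences between two aligned sequences of equal length takes only a line or two. A minimal Python sketch with invented sequences:

ref = "AACGTTAGCTAGGCTA"
alt = "AACGTTAGTTAGGCTA"

# Report (position, reference base, variant base) wherever the two differ.
snps = [(i, r, a) for i, (r, a) in enumerate(zip(ref, alt)) if r != a]
print(snps)  # [(8, 'C', 'T')] -- a C-to-T substitution, the most common kind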


The International HapMap Project
The International HapMap Project is a multi-country effort to identify and catalog genetic similarities and differences in human beings. Using the information in the HapMap, researchers will be able to find genes that affect health, disease, and individual responses to medications and environmental factors. The Project is a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States.

The goal of the International HapMap Project is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. By making this information freely available, the Project will help biomedical researchers find genes involved in disease and responses to therapeutic drugs.

For more information about the HapMap Project, visit http://www.hapmap.org/thehapmap.html.en and http://www.hapmap.org/whatishapmap.html


Latest Developments in the International HapMap Project
On October 27th at the American Society of Human Genetics 2005 Annual Meeting, the Wellcome Trust, the National Institutes of Health and the National Human Genome Research Institute offered a two-hour tutorial on effective HapMap usage. The session included an introduction to the HapMap, use of the HapMap for association studies, tag SNP selection, improving analyses using chips with pre-selected SNPs and a guide to the HapMap Web pages.

For more information, visit http://www.genome.gov/17015048


My Thoughts
After reading about the International HapMap Project, I'm impressed by the effort that scientists from around the world have put in to identify the genetic variants that occur in human beings, which differentiate our appearance, our likelihood of suffering from disease, and our responses to substances that we encounter every day.

Mashups

Mashup Introductions

From the time the internet was introduced to the world until now, developers have created many web APIs for their applications. One of the latest examples is Google Maps.

In today's web development, developers are getting smart. They use third-party data sources and other APIs, merging them to create unique applications. So, in general, mashups are an exciting genre of interactive Web applications that draw upon content retrieved from external data sources to create entirely new and innovative services.

The following link provides a video on the explanation of mashups:
http://news.zdnet.com/2422-13569_22-152729.html


Mashup Examples

  • Mapping Mashups
In today's information technology sector, people store huge amounts of information on the things and activities occurring every day. But this information, often containing location data, is not presented in the most interactive way. It was through the introduction of the Google Maps API that this information could be presented in a more graphical and interactive way, and web developers started to use the Google Maps API to present their information.

Existing websites that do mapping mashups: FrozenBear.com
  • Video and photos Mashups
There are many social networking and photo hosting websites that are popular among teenagers today. As we all know, there are mashups that can mash up photos and music to form a unique picture slide show (www.slide.com). For example, www.friendster.com allows users to insert videos, photos, or even music into their homepages. These videos, photos and music files are often uploaded to other hosting websites, like photobucket.com.


Conclusion

Mashups are truly an interesting and very innovative way of developing a web application. I believe that Mashups are getting popular and many developers will start doing mashups.

Friday, November 16, 2007

casting a spell on DNA - DNA origami?

Just saw this video and must share with all of you... so cool!
Have you ever seen a DNA smiley face or a DNA global map???

ICAT

Overview

ICAT reagents were developed by Professor Aebersold at the University of Washington. Researchers can use ICAT to compare relative protein abundance between two samples; often this is between healthy and diseased tissue, but it may be any of a number of comparisons. The advantage of this technology over gel electrophoresis is its speed and its ease of automation.

Recent Project using ICAT

Here are some projects that use ICAT:

  1. Complementary analysis of the Mycobacterium tuberculosis proteome by two dimensional electrophoresis and isotope coded affinity tag technology
    This project showed that applying different types of technologies to the same sample provides different types of results. To help alleviate the limitations of the two-dimensional electrophoresis and mass spectrometric identification method, ICAT was used.

    source: http://www.proteomecenter.org/PDFs/Schmidt.Complementary_analysis.MCP.o3.pdf

  2. Isotope coded affinity tag (ICAT)-based protein profiling
    The ICAT reagent was designed to affinity isolate and quantify via the use of a stable isotope the relative concentrations of cysteine-containing tryptic peptides obtained from digests of control versus experimental samples.

    Here are some projects that use ICAT-based protein profiling:
    http://keck.med.yale.edu/prochem/icat/references.htm

    source:
    http://keck.med.yale.edu/prochem/icat/

My thoughts:

ICAT is often used for protein analysis, as it allows quantitative profiling, which can help in studying prostate cancer cells and, in the future, may help discover a definitive cure for cancer, which would be a great breakthrough in the medical realm.

other reference sources:
http://www.chemsoc.org/ExemplarChem/entries/2002/proteomics/icat.htm
http://www3.interscience.wiley.com/cgi-bin/abstract/107614589/ABSTRACT

OMIM

Brief Description

Online Mendelian Inheritance in Man (OMIM) is a continuously updated catalog of human genes and genetic disorders.
It focuses on inherited, or heritable, genetic diseases and summarizes the traits or disorders associated with genes.
An OMIM record also contains the official symbol for each gene, the key mutations that cause disease, and the functions of the genes and the proteins they encode; it describes the genetic conditions and how the genes are inherited.
It is also considered to be a phenotypic companion to the Human Genome Project.


Source:
http://www.ncbi.nlm.nih.gov/Omim/omimfaq.html#db_descr
http://www.ornl.gov/sci/techresources/Human_Genome/posters/chromosome/geneguide.shtml
http://www.ncbi.nlm.nih.gov/entrez/dispomim.cgi?id=605558


System Using OMIM

MutaGeneSys
A system that uses genome-wide genotype data to predict diseases.
This system is able to detect individuals who are prone to the disorders catalogued in OMIM, among people who participate in genome studies.

The aim of the project is to create a flexible, extensible and efficient framework to store and query both direct and indirect association data, and lastly to provide a set of tools to import and maintain data, and to use these tools to populate the database with the direct and indirect association data that is currently available.
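Conceptually, the direct-association part boils down to a lookup from (marker, genotype) pairs to OMIM entries. The sketch below is not MutaGeneSys itself, and every rs number, genotype and OMIM ID in it is a placeholder.

# A toy direct-association table: risk genotype at a marker -> OMIM entry.
associations = {
    ("rs0000001", "TT"): "OMIM:100000 (hypothetical disorder A)",
    ("rs0000002", "AG"): "OMIM:200000 (hypothetical disorder B)",
}

def screen(genotypes):
    """Return OMIM hits for an individual's genome-wide genotype calls."""
    return [associations[(snp, gt)]
            for snp, gt in genotypes.items()
            if (snp, gt) in associations]

individual = {"rs0000001": "TT", "rs0000002": "AA", "rs0000003": "CC"}
print(screen(individual))  # ['OMIM:100000 (hypothetical disorder A)']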



Source:
http://www.cs.columbia.edu/~jds1/MutaGeneSys/MutaGeneSys.pdf
http://www.umiacs.umd.edu/~nedwards/documents/GW_MedInfo_Data.pdf
http://www.cs.columbia.edu/~jds1/MutaGeneSys/MutaGeneSysPoster.pdf


My Comment
I was astonished by what this system can do, and I am eager to see personalized medicine based on each individual's genetic information in the near future.

ants and cockroaches: swarm theory and decision-making behavior

I am always fascinated by the insect kingdom... not that I would keep them as pets; in fact, I am scared of most of them =P. However, I always find them rather intelligent.

Recently I came across this article from National Geographic, and I couldn't help wondering what else we have yet to discover from this tiny-sized kingdom to make our world more efficient.

Quoted:
"Ants aren't smart, ant colonies are." A colony can solve problems unthinkable for individual ants, such as finding the shortest path to the best food source, allocating workers to different tasks, or defending a territory from neighbors. As individuals, ants might be tiny dummies, but as colonies they respond quickly and effectively to their environment. They do it with something called swarm intelligence.

Hmmm... sounds very similar to social computing and all the Web 2.0 'technologies' (well, not exactly the same, but hey, that's how del.icio.us works!). Just a note: ants have been doing this for 140 million years.

Another interesting one, from Science: a theoretical biologist, Halloy, has successfully created robot cockroaches that mingle with real cockroaches, and even persuaded many of their insect 'peers' to hide in an unconventional place. With this, scientists speculate that the approach could be developed into a 'powerful' pest control. Well, as I said from the beginning, the insect kingdom is intelligent, so we will see whether it becomes reality. Whichever the case, it is interesting to know how insects think...

Institute for System Biology

Overview of the Institute for Systems Biology (ISB)

Systems biology is the study of the interactions between genes, proteins and biochemical reactions which give rise to life. Systems biologists focus on all the components and the interactions among them, instead of analyzing individual components. These interactions are the main reason for the form and function of an organism.

Systems biology grew out of the genetics "catalog" provided by the Human Genome Project, and out of increasing knowledge of how genes and their resulting proteins give rise to biological form and function. The internet has aided the study of systems biology, as it allows researchers to store and distribute massive amounts of information.

Overview of HUMAN PROTEOME FOLDING PROJECT

The Human Proteome Folding Project uses distributed computing power to predict the shapes of human proteins that are still unknown to researchers. The researchers hope to learn something from these shapes, as the shape of a protein shows how it functions inside our body.

The project starts with human proteins from the human genome and folds the proteins that have no known structure. Using Rosetta structure prediction, the structures (folds) of these unknown proteins can be predicted. Rosetta uses a scoring method to search through huge numbers of possible structures and choose the best of all. The predicted structures are then cross-matched with existing protein structures solved by X-ray crystallography and NMR spectroscopy, to see if the prediction has been seen before. If a match is found, researchers use various methods to infer the function of these unknown proteins.

My Conclusion

Systems biology is advancing at a fast pace; we are currently at a turning point in understanding what the future holds for biology and human medicine. The ISB is a pioneer of this new opportunity.

Source:
http://www.systemsbiology.org/
http://www.systemsbiology.org/Technology/Data_Visualization_and_Analysis/Human_Proteome_Folding_Project

Mascot



Mascot is a powerful search engine which uses mass spectrometry data to identify proteins from primary sequence databases.

While a number of similar programs are available, Mascot is unique in that it integrates all of the proven methods of searching.

These different search methods can be categorised as follows:


  • Peptide Mass Fingerprint in which the only experimental data are peptide mass values


  • Sequence Query in which peptide mass data are combined with amino acid sequence and composition information. A super-set of a sequence tag query


  • MS/MS Ion Search using uninterpreted MS/MS data from one or more peptides
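To show the flavour of the first method, here is a toy peptide mass fingerprint search in Python: digest each database protein with trypsin in silico, compute the peptide masses, and rank proteins by how many observed masses they explain. The two "database" proteins and the observed masses are invented; the residue masses are standard monoisotopic values, and real search engines score matches statistically rather than by a raw count.

import re

RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
       "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
       "D": 115.02694, "K": 128.09496, "E": 129.04259, "F": 147.06841,
       "R": 156.10111, "Y": 163.06333}
WATER = 18.01056

def tryptic_peptides(protein):
    """Cleave after K or R, but not when the next residue is P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

def peptide_mass(pep):
    return sum(RES[a] for a in pep) + WATER

database = {"protA": "GASPVKTLNDKEFR", "protB": "YVVNKGGGRAAAK"}
observed = [557.32, 589.31]          # "measured" peptide masses (invented)

def matches(name, tol=0.5):
    masses = [peptide_mass(p) for p in tryptic_peptides(database[name])]
    return sum(any(abs(m - o) < tol for m in masses) for o in observed)

print(max(database, key=matches))    # protA explains the most observed masses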


Project which uses mascot - HUPO Brain Proteome Project Pilot Studies

Purpose of this project:
Proteomics studies driven by large consortia often lead to heterogeneous data due to different strategies, techniques and equipment. Nevertheless, results have to be brought together in one database to assure a common, standardized interpretation. This is why this pilot study was initiated: a high degree of standardization is extremely important in order to obtain reliable results.

Objective of this project:
To come up with a solution to data standardization by analyzing data with four different search engines (Mascot, Sequest, PFF-Solver and Phenyx).

For more information, visit
http://www.medizinisches-proteom-center.de/lehre/poster/HUPO/Christian_Stephan_HUPO_2005.pdf

For information on mass spectrometry, visit
http://bioisit.blogspot.com/2007/11/mass-spectrometry.html

References:
http://www.matrixscience.com/
http://www.hbpp.org/

Thursday, November 15, 2007

Gel Electrophoresis

What is Gel Electrophoresis? Learn about me here in a FUN way!

Gel electrophoresis is a method to separate DNA or proteins by driving them with an electrical charge through a gel, thus the name.

DNA is a negatively charged molecule, so it is moved by the electric current from the negative end to the positive end. The DNA moves through agarose, which works like a sieve, and is separated according to size: the smaller or shorter fragments move faster towards the positively charged end.
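Because migration distance is roughly linear in the logarithm of fragment length, a ladder of fragments of known sizes can calibrate a gel, letting you read off unknown sizes. A minimal Python sketch with invented distances:

import math

ladder = [(1000, 1.0), (500, 2.0), (250, 3.0), (125, 4.0)]  # (size in bp, distance in cm)

def estimate_size(distance):
    """Linearly interpolate log10(size) between ladder points."""
    for (s1, d1), (s2, d2) in zip(ladder, ladder[1:]):
        if d1 <= distance <= d2:
            frac = (distance - d1) / (d2 - d1)
            return 10 ** (math.log10(s1) + frac * (math.log10(s2) - math.log10(s1)))
    raise ValueError("distance outside the ladder range")

print(round(estimate_size(2.5)))  # about 354 bp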

A video on how agarose gel electrophoresis is performed:


Proteins are commonly separated using polyacrylamide gel electrophoresis (PAGE) to characterize individual proteins in a complex sample or to examine multiple proteins within a single sample.
For more info on Polyacrylamide Gel Electrophoresis

The electrophoresis procedure can be a starting point for future additional identification and isolation of DNA fragments.

An example of 1D gel electrophoresis:

An example of 2D gel electrophoresis:


Current projects:
From the University of Bristol, Proteomics Facility.

One example is Protein Expression Analysis.
Objective: To be able to elucidate proteins which may be downstream of galanin and therefore play important trophic roles following nerve injury.
More!

2D Gel Matching
Department of Computer Science, Free University of Berlin

Basically, in this project, protein information is identified on a gel image and compared with another gel image for which the identities of the proteins are already known.

Visit here for more information.

Mass Spectrometry

Brief Description:

In general terms, mass spectrometry is an analytical technique used for measuring the molecular mass of a sample. In bioinformatics, mass spectrometry plays an important role in proteomics and will remain a key technology for the foreseeable future. "Structural information can be generated using certain types of mass spectrometers, usually those with multiple analysers which are known as tandem mass spectrometers. This is achieved by fragmenting the sample inside the instrument and analysing the products generated." (quoted from the Introduction to MS link in the sources below). Mass spectrometry data are used in Mascot.
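To illustrate the fragmentation idea in that quote, here is a small Python sketch listing the singly charged b ions (N-terminal fragments) and y ions (C-terminal fragments) for one peptide, using standard monoisotopic residue masses. The peptide is just an example.

RES = {"P": 97.05276, "E": 129.04259, "T": 101.04768,
       "I": 113.08406, "D": 115.02694}   # monoisotopic residue masses
WATER, PROTON = 18.01056, 1.00728

def fragment_ions(peptide):
    """Singly charged b- and y-ion m/z values for each backbone break."""
    masses = [RES[a] for a in peptide]
    for i in range(1, len(peptide)):
        b = sum(masses[:i]) + PROTON           # N-terminal piece
        y = sum(masses[i:]) + WATER + PROTON   # C-terminal piece
        yield peptide[:i], round(b, 3), peptide[i:], round(y, 3)

for b_seq, b_mz, y_seq, y_mz in fragment_ions("PEPTIDE"):
    print(f"b({b_seq}) = {b_mz}   y({y_seq}) = {y_mz}")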



Ongoing Project:

Molecular Characterization of the Lipidome by Mass Spectrometry.

Aim: Designed for high-throughput oriented lipid analysis by integrating robotic sampling, lipid species-specific mass analysis and software-assisted deconvolution of spectral data.

Improvements in mass spectrometric technology have proved highly efficient for the characterization and quantification of molecular lipid species in total lipid extracts. This methodology is less time-consuming than conventional methods (e.g. high-performance liquid chromatography, thin-layer chromatography (TLC), and gas chromatography) and requires a smaller sample amount because of its higher sensitivity and specificity.

It uses two types of mass spectrometry:

  • electrospray ionization mass spectrometry
  • hybrid quadrupole time-of-flight (QqTOF) mass spectrometer

The whole of the Ongoing Project section is quoted from the Project Information link source below.

My Thoughts:
With this new research on mass spectrometry, future generations will have an easier time analyzing proteomic and other small-molecule data, not only in bioinformatics but also in other related fields. There is even a blog for mass spectrometry with related videos and a link which shows a mass spectrometer protecting soldiers in Iraq.

Source:
Introduction to MS
MPI.CBG research
Project Information

Text Mining Technique

Text mining or text data mining is the process of extracting useful information from text.

Through the devising of patterns and trends, high-quality information can be obtained from input text.

Andrew Clegg of the Shepherd Group is developing methods of extracting data from bioinformatics resources (e.g. molecular biology journal articles) using text mining techniques. As recognizing and identifying gene and protein names is challenging, a system called BioNERD was developed and integrated into the pipeline.

Another problem he faces is natural language itself: there are many different ways of expressing the same thing.

His solution is: "parsing the sentence with a phrase-structure parser, mapping the resulting syntax tree into a dependency graph where each node is a word and each arc a grammatical relation (see image), and identifying subgraphs covering two or more entities which are characteristic of genuine relationships." - quoted from his site.

In layman's terms: he uses a technique to split each sentence into smaller pieces, determines how those pieces relate to each other, and thus draws out the information.
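
His dependency-graph approach is beyond a blog snippet, but a much cruder version of the same idea can be sketched in a few lines of Python: find sentences where two known protein names co-occur with an interaction verb. The entity and verb lists are invented for illustration; this is plain co-occurrence, not Clegg's parser-based method.

import re

# Toy co-occurrence extractor. Entity and verb lists are invented.
PROTEINS = {"p53", "MDM2", "BRCA1"}
VERBS = {"binds", "inhibits", "activates", "phosphorylates"}

def extract_relations(text):
    # Yield (protein, verb, protein) triples from sentences in which
    # two known proteins co-occur with an interaction verb.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = re.findall(r"[A-Za-z0-9]+", sentence)
        found = [w for w in words if w in PROTEINS]
        verbs = [w for w in words if w.lower() in VERBS]
        if len(found) >= 2 and verbs:
            yield (found[0], verbs[0], found[1])

text = "MDM2 binds p53 and inhibits its activity. BRCA1 is unrelated here."
for subj, verb, obj in extract_relations(text):
    print(subj, verb, obj)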

Next stop, we have a research project led by Prof. Dr. Udo Hahn called BOOTStrep, short for Bootstrapping Of Ontologies and Terminologies STrategic REsearch Project. This project aims to catalogue all existing biological terminological resources in one standardized library, which can further grow its database by using text mining tools and techniques to analyze biological documents and acquire new information from them.

Once completed, BOOTStrep will be available for public use in a number of languages. Furthermore, the system itself will be able to validate its data automatically for accuracy and originality.

With both of these technologies in place, we will be able to extract valuable information from journals and other documents without actually reading them, saving us precious time for our coding and other research. With the ever-growing body of biological information, we really need these services to help us keep track of the biological knowledge we have accumulated over time; otherwise many discoveries could be overlooked for lack of the human effort needed to actively seek out what others have newly found.

Sources: Andrew B. Clegg projects; BOOTStrep project website

Friday, November 9, 2007

IT3121 2007S2 research and blog

Dear IT3121 2007S2 students,

I hope by now you have all started reading or researching the term assigned to you (see the following table). The main purpose is to study the latest research or interesting projects (in the field of bioinformatics) and give your comments.



You can directly post an entry to this blog or link to your own website or blog. You can use animations, graphs, pictures or even quotes from famous researchers; just remember to include the citation and the necessary acknowledgement.

Looking forward to reading your blog and research =)

Friday, October 19, 2007

common sense of our genomes?

another interesting article from Nature - common sense for our genomes.
The following are some quotes:

"A personal DNA sequence is not yet practically useful. But it could be, if we had the right resources available to interpret genomes"

"It remains to be seen whether we will learn anything more important from our genomes than the need to use sunscreen, eat better and exercise more"

Rosetta@home is shaping protein structures

Just read this news from Nature... The shape of protein structures to come

David Baker from the University of Washington is reporting results in modelling a protein using just its amino acid sequence. This is done using the Rosetta@home program, which taps the computing power of 150,000 computers. Cool... modelling may soon see the light of day.

Tuesday, October 16, 2007

Tutorials in bioinformatics

found this website with lots of goodies.

The online lectures on bioinformatics from the Max Planck Society are a good source for anyone who is new to bioinformatics and wants to know more about it.

One other very resourceful site is the Canadian Bioinformatics Workshops.

A leap forward for SNP studies

... and knowing what makes you, YOU!

NIH has made a genomic database available free to researchers worldwide. What's exciting is the inclusion of clinical and phenotype data alongside the genetic information of subjects. The increasing number of studies shared through this project opens up new opportunities for bioinformaticians to analyze and single out genes responsible for disease phenotypes, and thus possibly to create prediction models that would allow scientists and clinicians to improve the diagnosis and prognosis of serious illnesses.
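
For a flavour of what singling out genes can mean computationally, here is a minimal Python sketch of a case-control association test for a single SNP, using a chi-square test on allele counts. The counts are made up, and real dbGaP-scale analyses also correct for multiple testing and population structure.

from scipy.stats import chi2_contingency

# Case-control allele counts for one SNP (invented numbers).
# Rows: cases / controls; columns: A alleles / a alleles.
table = [
    [220, 180],  # cases
    [150, 250],  # controls
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2e}")
if p_value < 0.05:
    print("Allele frequencies differ between cases and controls.")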

[Source]

Reference: dbGaP

Thursday, October 11, 2007

Web 2.0 and bioinformatics

Just attended a solid session on mashups and heard about the following:
Google Mashup Editor; Yahoo Pipes; IBM QEDWiki; Microsoft Popfly.

Was wondering if there is any mashup application in bioinformatics and found the following:
pipe dreams
A pipe to search bioinformatics journals.
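
In the same spirit, a pipe in miniature can be sketched in a few lines of Python: fetch an RSS feed and keep only the items whose titles mention a keyword. The feed URL below is a placeholder; any journal RSS feed would do.

import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL; substitute a real journal RSS feed.
FEED_URL = "https://example.org/bioinformatics-journal.rss"

def filter_feed(url, keyword):
    # Yield item titles from the RSS feed that mention the keyword.
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        if keyword.lower() in title.lower():
            yield title

for title in filter_feed(FEED_URL, "proteomics"):
    print(title)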

I am sure there are a lot more out there. Care to share?

Wednesday, October 10, 2007

The world is beautiful

some sites that John Larkin recommended:
Singapore:
yesterday.sg
the annotated budak
Australia:
the adventurer's club

favourite life science blogs

saw this article..
http://www.the-scientist.com/news/home/53596/

Nanyang Polytechnic



They are the best...


My favourite characters


Web 2.0 workshop

Speaker John Larkin's website (good source of Web 2.0 materials):
http://www.larkin.net.au/020_web20infoshare.html

Tools for searching blogs:
http://www.technorati.com/
http://www.bloglines.com/

Tools for searching the web (with results grouped in clusters or categories):
Overwhelmed by the information out there??? You may want to try the following:
http://www.vivisimo.com/
http://www.kartoo.com/

A tip I got from the speaker on starting a blog on a specific topic: go to http://en.wikipedia.org/ to find the relevant links and, from those, some suitable sites.

To have a blog on your own site (which may be easier to manage if you want a more secure site), one free option to use:
http://wordpress.com/

Bringing everything together (embedded in the blog) and getting the latest updates:
http://jaiku.com/

Real-time online discussion with the concept of followers:
http://www.twitter.com/

Monday, September 24, 2007

interesting website - IBM genographic project: