'''LabelMe''' is a project created by the [[MIT Computer Science and Artificial Intelligence Laboratory]] (CSAIL) which provides a [[dataset]] of [[digital images]] with [[Annotation#Imaging|annotations]]. The dataset is dynamic, free to use, and open to public contribution. Its primary application is in [[computer vision]] research. As of October 31, 2010, LabelMe has 187,240 images, 62,197 annotated images, and 658,992 labeled objects.


==Motivation==
The motivation behind creating LabelMe comes from the history of publicly available data for computer vision researchers. Most available data was tailored to a specific research group's problems, forcing new researchers to collect additional data to solve their own problems. LabelMe was created to address several common shortcomings of the available data. The following qualities distinguish LabelMe from previous work.
* Designed for [[Computer vision#Recognition|recognition]] of a class of objects instead of single instances of an object. For example, a traditional dataset may have contained images of dogs, each of the same size and orientation. In contrast, LabelMe contains images of dogs in multiple angles, sizes, and orientations.
* Designed for recognizing objects embedded in arbitrary scenes instead of images that are [[cropped]], [[Normalization (image processing)|normalized]], and/or [[Image editing#Image size alteration|resized]] to display a single object.
* Complex annotation: Instead of labeling an entire image (which also limits each image to containing a single object), LabelMe allows annotation of multiple objects within an image by specifying a bounding [[polygon]] for each object.
* Contains a large number of object classes and allows the creation of new classes easily.
* Diverse images: LabelMe contains images from many different scenes.
* Provides non-[[copyright]]ed images and allows public additions to the annotations, keeping both the images and the annotations free to use.
 
==Annotation Tool==
<!-- Deleted image removed: [[Image:Annotation tool example.jpg|thumb|The LabelMe annotation tool]] -->
The LabelMe annotation tool provides a means for users to contribute to the project. The tool can be accessed anonymously or by logging in to a free account. To access the tool, users must have a compatible [[web browser]] with [[JavaScript]] support. When the tool is loaded, it chooses a random image from the LabelMe dataset and displays it on the screen. If the image already has object labels associated with it, they are overlaid on the image as polygons, with each distinct object label displayed in a different color.
 
If the image is not completely labeled, the user can use the [[mouse (computing)|mouse]] to draw a polygon around an object in the image. For example, if a person is standing in front of a building in the displayed image, the user can click on a point on the border of the person and continue clicking along the outside edge until returning to the starting point. Once the polygon is closed, a bubble pops up on the screen that lets the user enter a label for the object, choosing whatever label best describes it. If the user disagrees with a previous labeling of the image, the user can click on an object's outline polygon and either delete the polygon completely or edit the text label to give it a new name.
 
As soon as the user makes changes to the image, they are saved and openly available for anyone to download from the LabelMe dataset. In this way, the data is always changing through contributions from the community of users. Once the user is finished with an image, clicking the ''Show me another image'' link selects another random image to display.
 
==Problems with the data==
The LabelMe dataset has some problems that should be noted. Some are inherent in the data, such as the objects in the images not being uniformly distributed with respect to size and image location. This is due to the images being primarily taken by humans who tend to focus the camera on interesting objects in a scene. However, cropping and rescaling the images randomly can simulate a uniform distribution.<ref>[[#Reference-idRussell2007|Russell et al. 2007]], Section 2.5</ref> Other problems are caused by the amount of freedom given to the users of the annotation tool. Some problems that arise are:
* The user can choose which objects in the scene to outline. Should an [[Hidden surface determination|occluded]] person be labeled? Should the sky be labeled?
* The user has to describe the shape of the object by outlining a polygon. Should the fingers of a person's hand be outlined in detail? How much precision must be used when outlining objects?
* The user chooses what text to enter as the label for the object. Should the label be ''person'', ''man'', or ''pedestrian''?
The creators of LabelMe decided to leave these decisions up to the annotator, on the grounds that people will tend to annotate images according to what they consider the natural labeling. This also provides some variability in the data, which can help researchers tune their [[algorithms]] to account for this variability.<ref>[[#Reference-idRussell2007|Russell et al. 2007]], Section 2.2</ref>
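As a rough illustration of the random cropping and rescaling mentioned at the start of this section, the following Python sketch (using the Pillow imaging library; the file name, crop range, and output size are assumptions, not part of LabelMe) takes a random square crop of an image and rescales it to a fixed size, so that objects end up at varying positions and scales:

<syntaxhighlight lang="python">
# Minimal sketch: a random square crop plus rescale, which varies an
# object's apparent size and position across samples.
import random
from PIL import Image

def random_crop_and_rescale(path, out_size=256):
    img = Image.open(path)
    w, h = img.size
    short = min(w, h)
    side = random.randint(short // 2, short)   # random crop size
    left = random.randint(0, w - side)         # random crop position
    top = random.randint(0, h - side)
    crop = img.crop((left, top, left + side, top + side))
    return crop.resize((out_size, out_size))

# sample = random_crop_and_rescale("street_scene.jpg")  # hypothetical file
</syntaxhighlight>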
 
==Extending the data==
 
===Using WordNet===
<!--  Commented out because image was deleted: [[Image:Labelme_polygons_words.gif|right|Comparing polygon growth with word growth]] -->
Since the text labels for objects provided in LabelMe come from user input, there is a lot of variation in the labels used (as described above). Because of this, analysis of objects can be difficult. For example, a picture of a dog might be labeled as ''dog'', ''canine'', ''hound'', ''pooch'', or ''animal''. Ideally, when using the data, the object class ''dog'' at the abstract level should incorporate all of these text labels.
 
[[WordNet]] is a database of English words organized into a semantic hierarchy. It allows assigning a word to a category, or in WordNet terminology, a sense. Sense assignment is not easy to do automatically. When the authors of LabelMe tried automatic sense assignment, they found it was prone to a high rate of error, so instead they assigned words to senses manually. At first, this may seem like a daunting task, since new labels are added to the LabelMe project continuously. However, the number of distinct words (descriptions) grows slowly compared with the continuous growth of polygons, so the assignment is easy enough for the LabelMe team to keep up to date manually.<ref>[[#Reference-idRussell2007|Russell et al. 2007]], Section 3.1</ref>
 
Once WordNet assignment is done, searches in the LabelMe database are much more effective. For example, a search for ''animal'' might bring up pictures of ''dogs'', ''cats'', and ''snakes''. Because the assignment was done manually, a picture of a computer mouse labeled ''mouse'' will not show up in a search for ''animals''. Also, if objects are labeled with more complex terms like ''dog walking'', WordNet still allows a search for ''dog'' to return these objects as results. WordNet makes the LabelMe database much more useful.
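The sense-aware retrieval described above can be sketched with the WordNet interface in Python's NLTK library. This is only an illustration under assumed sense assignments; it is not how the LabelMe site itself is implemented:

<syntaxhighlight lang="python">
# Sketch of hypernym-based retrieval: decide whether a label's assigned
# WordNet sense falls under the "animal" category.
# Requires the WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

# Manually assigned senses for a few labels (hypothetical mapping).
label_senses = {
    "dog": wn.synset("dog.n.01"),
    "mouse": wn.synset("mouse.n.04"),  # the computer-device sense
    "snake": wn.synset("snake.n.01"),
}

animal = wn.synset("animal.n.01")

def is_animal(sense):
    # Walk up the hypernym hierarchy and look for the "animal" sense.
    return animal in sense.closure(lambda s: s.hypernyms())

print([label for label, sense in label_senses.items() if is_animal(sense)])
# expected: ['dog', 'snake']
</syntaxhighlight>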
 
===Object-part hierarchy===
<!-- Commented out because image was deleted: [[Image:labelme_part_labels.jpg|right|An example of part of an object ''building'']] -->
Having a large dataset of objects in which overlap is allowed provides enough data to try to categorize objects as being parts of other objects. For example, most objects labeled ''wheel'' are probably parts of objects with other labels like ''car'' or ''bicycle''. These are called '''part labels'''. To determine whether label '''P''' is a '''part label''' for label '''O''':<ref>[[#Reference-idRussell2007|Russell et al. 2007]], Section 3.2</ref>
* Let <math>\mathrm{I}_\mathrm{O}\,</math> denote the set of images containing an object (e.g. car)
* Let <math>\mathrm{I}_\mathrm{P}\,</math> denote the set of images containing a part (e.g. wheel)
* Let the overlap score between object '''O''' and part '''P''', <math>\mathrm{S}_{\mathrm{O},\mathrm{P}}\,</math>, be defined as the ratio of the intersection area to the area of the part polygon, i.e. <math>\frac{\mathrm{A}(\mathrm{O}\cap\mathrm{P})}{\mathrm{A}(\mathrm{P})}\,</math>
* Let <math>\mathrm{I}_{\mathrm{O},\mathrm{P}} \subseteq \mathrm{I}_\mathrm{P}\,</math> denote the images where object and part polygons have <math>\mathrm{S}_{\mathrm{O},\mathrm{P}} > \beta\,</math> where <math>\beta\,</math> is some threshold value. The authors of LabelMe use <math>\beta=0.5\,</math>
* The object-part score for a candidate label is <math>\frac{\mathrm{N}_{\mathrm{O},\mathrm{P}}}{\mathrm{N}_\mathrm{P}+\alpha}\,</math> where <math>\mathrm{N}_{\mathrm{O},\mathrm{P}}\,</math> and <math>\mathrm{N}_\mathrm{P}\,</math> are the number of images in <math>\mathrm{I}_{\mathrm{O},\mathrm{P}}\,</math> and <math>\mathrm{I}_\mathrm{P}\,</math>, respectively, and <math>\alpha\,</math> is a concentration parameter. The authors of LabelMe use <math>\alpha=5\,</math>.
This algorithm allows the automatic classification of parts of an object when the part objects are frequently contained within the outer object.
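A minimal sketch of this scoring procedure in Python, using the Shapely geometry library (the data layout and example polygons are assumptions; this is not code from the LabelMe project):

<syntaxhighlight lang="python">
# Object-part score: count images where a part polygon overlaps an object
# polygon strongly enough, then compute N_OP / (N_P + alpha).
from shapely.geometry import Polygon

BETA = 0.5   # overlap threshold used by the LabelMe authors
ALPHA = 5.0  # concentration parameter used by the LabelMe authors

def overlap_score(obj_poly, part_poly):
    # Ratio of the intersection area to the area of the part polygon.
    return obj_poly.intersection(part_poly).area / part_poly.area

def object_part_score(images):
    # `images`: one (object_polygons, part_polygons) pair per image in I_P,
    # i.e. per image containing at least one part polygon.
    n_p = len(images)
    n_op = sum(
        any(overlap_score(o, p) > BETA for o in objs for p in parts)
        for objs, parts in images
    )
    return n_op / (n_p + ALPHA)

# One image in which a "wheel" polygon lies inside a "car" polygon.
car = Polygon([(0, 0), (10, 0), (10, 4), (0, 4)])
wheel = Polygon([(1, 0), (3, 0), (3, 2), (1, 2)])
print(object_part_score([([car], [wheel])]))  # 1 / (1 + 5) ≈ 0.17
</syntaxhighlight>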
 
===Object depth ordering===
Another instance of object overlap is when one object is in front of another. For example, an image might contain a person standing in front of a building. The person is not a '''part label''' as above, since the person is not part of the building; rather, they are two separate objects that happen to overlap. To automatically determine which object is in the foreground and which is in the background, the authors of LabelMe propose several options:<ref>[[#Reference-idRussell2007|Russell et al. 2007]], Section 3.3</ref>
* If an object is completely contained within another object, then the inner object must be in the foreground. Otherwise, it would not be visible in the image. The only exception is with transparent or translucent objects, but these occur rarely.
* One of the objects could be labeled as something that cannot be in the foreground. Examples are ''sky'', ''ground'', or ''road''.
* The object with more polygon points inside the intersecting area is most likely the foreground. The authors tested this hypothesis and found it to be highly accurate.
* Histogram intersection<ref>[[#Reference-idSwain1991|Swain et al. 1991]]</ref> can be used. The [[color histogram]] of the intersection region is compared with the color histograms of the two objects, and the object whose histogram is closer is assigned as the foreground, since the visible pixels in the overlap belong to the foreground object. This method is less accurate than counting the polygon points.
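The polygon-point heuristic from the list above can be sketched as follows, again with Shapely (the rule of thumb and the example outlines are illustrative, not the authors' implementation):

<syntaxhighlight lang="python">
# Depth-ordering heuristic: the object whose outline contributes more
# vertices to the overlap region is guessed to be the foreground.
from shapely.geometry import Point, Polygon

def guess_foreground(poly_a, poly_b):
    overlap = poly_a.intersection(poly_b)
    if overlap.is_empty:
        return None  # no overlap, so no depth ordering is needed

    def count_in(coords):
        # Count outline vertices falling in the overlap (boundary included).
        return sum(overlap.covers(Point(x, y)) for x, y in coords)

    count_a = count_in(poly_a.exterior.coords)
    count_b = count_in(poly_b.exterior.coords)
    return "a" if count_a >= count_b else "b"

# A detailed "person" outline in front of a coarse "building" rectangle.
building = Polygon([(0, 0), (20, 0), (20, 10), (0, 10)])
person = Polygon([(8, 0), (9, 0), (9.5, 3), (10, 4), (9.5, 5), (8.5, 5), (8, 4)])
print(guess_foreground(person, building))  # expected: "a" (the person)
</syntaxhighlight>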
 
==Matlab Toolbox==
The LabelMe project provides a set of tools for using the LabelMe dataset from Matlab. Since computer vision research is often done in Matlab, this allows the dataset to be integrated with existing tools. The entire dataset can be downloaded and used offline, or the toolbox can download content dynamically on demand.
 
==See also==
* [[MNIST database]]
* [[Caltech 101]]
 
==References==
{{Reflist}}
 
*{{wikicite|id=idRussell2007|reference=B. C. Russell, A. Torralba, K. P. Murphy, W. T. Freeman, ''LabelMe: a database and web-based tool for image annotation.'' MIT AI Lab Memo AIM-2005-025, September, 2005. [http://people.csail.mit.edu/brussell/research/AIM-2005-025-new.pdf PDF]}}
*{{wikicite|id=idSwain1991|reference=M. J. Swain and D. H. Ballard, ''Color indexing.'' International Journal of Computer Vision, 7(1), 1991.}}
 
==External links==
* http://labelme.csail.mit.edu/ - LabelMe - The open annotation tool
* http://people.csail.mit.edu/torralba/research/LabelMe/js/LabelMeQueryObjectFast.cgi - Search LabelMe objects
* http://labelme.csail.mit.edu/tool.html - Contribute to the LabelMe project
* http://labelme.csail.mit.edu/LabelMeToolbox/index.html - LabelMe Matlab Toolbox
 
[[Category:Datasets in computer vision]]
[[Category:Object recognition and categorization]]
