Main Page: Difference between revisions

From formulasearchengine
{{Redirect|C-space|the art gallery|C-Space, Beijing}}
{{other uses|PCI configuration space}}

In [[classical mechanics]], the parameters that define the configuration of a system are called ''[[generalized coordinates]],'' and the vector space defined by these coordinates is called the '''configuration space''' of the [[physical system]]. It is often the case that these parameters satisfy mathematical constraints, which means that the set of actual configurations of the system is a manifold in the space of generalized coordinates. This [[manifold]] is called the '''configuration manifold''' of the system.

==Configuration spaces in physics==
The configuration space of a single particle moving in ordinary [[Euclidean space|Euclidean 3-space]] is just '''R'''<sup>3</sup>. For ''n'' particles the configuration space is '''R'''<sup>3''n''</sup>, or possibly the subspace where no two positions are equal. More generally, one can regard the configuration space of ''n'' particles moving in a manifold ''M'' as the [[function space]] ''M''<sup>''n''</sup>.

To take account of both position and momenta one moves to the [[cotangent bundle]] of the configuration manifold. This larger manifold is called the [[phase space]] of the system. In short, a configuration space is typically "half" of (see [[Lagrangian distribution]]) a [[phase space]] that is constructed from a function space.

In [[quantum mechanics]] one [[path integral formulation|formulation]] emphasises 'histories' as configurations.

===Robotics===
In robotics, ''configuration space'' generally refers to the set of positions reachable by a robot's [[end-effector]], considered as a rigid body in three-dimensional space.<ref>John J. Craig, ''Introduction to Robotics: Mechanics and Control'', 3rd Ed. Prentice-Hall, 2004</ref> Thus, the positions of the end-effector of a robot can be identified with the group of spatial rigid transformations, often denoted SE(3).

The joint parameters of the robot are used as generalized coordinates to define its configurations. The set of joint parameter values is called the ''joint space''. The robot's [[forward kinematics|forward]] and [[inverse kinematics]] equations define mappings between its configurations and its end-effector positions, or between joint space and configuration space. Robot [[motion planning]] uses these mappings to find a path in joint space that provides a desired path in the configuration space of the end-effector.

== Configuration spaces in mathematics ==
[[File:Moebius Surface 1 Display Small.png|thumb|The configuration space of 2 not necessarily distinct points on the circle is the [[orbifold]] <math>T^2/S_2,</math> which is the [[Möbius strip]].]]
In [[mathematics]] a '''configuration space''' refers to a broad family of constructions closely related to the '''[[state space]]''' notion in physics. The most common notion of '''configuration space''' in mathematics, <math>C_n X</math>, is the set of ''n''-element subsets of a [[topological space]] <math>X</math>. This set is given a [[topological space|topology]] by considering it as the [[quotient space|quotient]] <math>C_n X = F_n X / \Sigma_n</math>, where <math>F_n X = \{(x_1,\cdots,x_n) \in X^n : x_i \neq x_j \text{ for all } i \neq j \}</math> and <math>\Sigma_n</math> is the [[symmetric group]] acting by permuting the coordinates of <math>F_n X</math>. Typically, <math>C_n X</math> is called the configuration space of ''n'' unordered points in <math>X</math> and <math>F_n X</math> is called the configuration space of ''n'' ordered or coloured points in <math>X</math>; the space of ''n'' ordered, not necessarily distinct points is simply <math>X^n.</math>

If the original space is a manifold, the configuration space of ''distinct,'' unordered points is also a manifold, while the configuration space of ''not necessarily distinct'' unordered points is instead an [[orbifold]].

Configuration spaces are related to [[braid theory]], where the [[braid group]] is considered as the [[fundamental group]] of the space <math>C_n \Bbb R^2</math>.

A configuration space is a type of [[classifying space]] or (fine) [[moduli space]]. In particular, there is a universal bundle <math> \pi\colon E_n\to C_n </math> which is a subbundle of the trivial bundle <math> C_n\times X^n\to C_n</math>, and which has the property that the fiber over each point <math> p\in C_n</math> is the ''n''-element subset of <math> X </math> classified by ''p''.

The homotopy type of configuration spaces is not [[homotopy invariant]]: for example, the spaces <math>F_n \Bbb R^m</math> are not homotopy equivalent for any two distinct values of <math>m</math>. For instance, <math>F_n\Bbb R</math> is not connected, <math>F_n\Bbb R^2</math> is a <math>K(\pi,1)</math>, and <math>F_n \Bbb R^m</math> is simply connected for <math> m \geq 3</math>.

It used to be an open question whether there were examples of ''compact'' manifolds which were homotopy equivalent but had non-homotopy-equivalent configuration spaces: such an example was found only in 2005 by Longoni and Salvatore. Their example consists of two three-dimensional [[lens space]]s and the configuration spaces of at least two points in them. That these configuration spaces are not homotopy equivalent was detected by [[Massey product]]s in their respective universal covers.<ref>{{citation|last1= Salvatore| first1=Paolo| last2=Longoni| first2=Riccardo| title=Configuration spaces are not homotopy invariant| journal= Topology |volume= 44| year=2005|issue= 2|pages=375&ndash;380|doi= 10.1016/j.top.2004.11.002}}</ref>

==See also==
*[[Feature space]] (topic in pattern recognition)
*[[Parameter space]]
*[[Phase space]]
*[[State space (physics)]]

----

{{distinguish|a priori probability}}
{{No footnotes|article|date=February 2008}}
{{Bayesian statistics}}
In [[Bayesian probability|Bayesian]] [[statistical inference]], a '''prior probability distribution''', often called simply the '''prior''', of an uncertain quantity ''p'' is the [[probability distribution]] that would express one's uncertainty about ''p'' before some evidence is taken into account. For example, ''p'' could be the proportion of voters who will vote for a particular politician in a future election. It is meant to attribute uncertainty, rather than randomness, to the uncertain quantity. The unknown quantity may be a [[parameter]] or [[latent variable]].

One applies [[Bayes' theorem]], multiplying the prior by the [[likelihood function]] and then normalizing, to get the ''[[posterior probability distribution]]'', which is the conditional distribution of the uncertain quantity given the data.

A prior is often the purely subjective assessment of an experienced expert. Some will choose a ''[[conjugate prior]]'' when they can, to make calculation of the posterior distribution easier.

Parameters of prior distributions are called ''[[hyperparameter]]s,'' to distinguish them from parameters of the model of the underlying data. For instance, if one is using a [[beta distribution]] to model the distribution of the parameter ''p'' of a [[Bernoulli distribution]], then:
* ''p'' is a parameter of the underlying system (Bernoulli distribution), and
* ''α'' and ''β'' are parameters of the prior distribution (beta distribution), hence ''hyper''parameters.

== Informative priors ==
An ''informative prior'' expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a [[normal distribution]] with [[expected value]] equal to today's noontime temperature and [[variance]] equal to the day-to-day variance of atmospheric temperature, or a distribution of the temperature for that day of the year.

This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature); pre-existing evidence which has already been taken into account is part of the prior, and as more evidence accumulates the prior is determined largely by the evidence rather than by any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation.

== Uninformative priors ==<!-- This section is linked from [[Non-informative prior]] -->
An ''uninformative prior'' expresses vague or general information about a variable. The term "uninformative prior" may be somewhat of a misnomer; often, such a prior might be called a ''not very informative prior'', or an ''objective prior'', i.e. one that is not subjectively elicited. Uninformative priors can express "objective" information such as "the variable is positive" or "the variable is less than some limit".

The simplest and oldest rule for determining a non-informative prior is the [[principle of indifference]], which assigns equal probabilities to all possibilities.

In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior.

Some attempts have been made at finding [[a priori probability|a priori probabilities]], i.e. probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy, with Bayesians being roughly divided into two schools: "objective Bayesians", who believe such priors exist in many useful situations, and "subjective Bayesians", who believe that in practice priors usually represent subjective judgements of opinion that cannot be rigorously justified (Williamson 2010). Perhaps the strongest arguments for objective Bayesianism were given by [[Edwin T. Jaynes]], based mainly on the consequences of symmetries and on the principle of maximum entropy.

As an example of an a priori prior, due to Jaynes (2003), consider a situation in which one knows a ball has been hidden under one of three cups, A, B or C, but no other information is available about its location. In this case a ''uniform prior'' of ''p''(''A'')&nbsp;= ''p''(''B'')&nbsp;= ''p''(''C'')&nbsp;= 1/3 seems intuitively like the only reasonable choice. More formally, we can see that the problem remains the same if we swap around the labels ("A", "B" and "C") of the cups. It would therefore be odd to choose a prior for which a permutation of the labels would cause a change in our predictions about which cup the ball will be found under; the uniform prior is the only one which preserves this invariance. If one accepts this invariance principle then one can see that the uniform prior is the logically correct prior to represent this state of knowledge. This prior is "objective" in the sense of being the correct choice to represent a particular state of knowledge, but it is not objective in the sense of being an observer-independent feature of the world: in reality the ball exists under a particular cup, and it only makes sense to speak of probabilities in this situation if there is an observer with limited knowledge about the system.
 
As a more contentious example, Jaynes published an argument (Jaynes 1968) based on [[Lie group]]s suggesting that the prior representing complete uncertainty about a probability should be the [[Haldane prior]] ''p''<sup>&minus;1</sup>(1&nbsp;&minus;&nbsp;''p'')<sup>&minus;1</sup>. The example Jaynes gives is of finding a chemical in a lab and asking whether it will dissolve in water in repeated experiments. The Haldane prior<ref>This prior was proposed by [[J.B.S. Haldane]] in "A note on inverse probability", Mathematical Proceedings of the Cambridge Philosophical Society 28, 55–61, 1932, available online at http://journals.cambridge.org/action/displayAbstract?aid=1733860. See also J. Haldane, "The precision of observed values of small frequencies", Biometrika, 35:297–300, 1948, available online at http://www.jstor.org/pss/2332350.</ref> gives by far the most weight to <math>p=0</math> and <math>p=1</math>, indicating that the sample will either dissolve every time or never dissolve, with equal probability. However, if one has observed samples of the chemical to dissolve in one experiment and not to dissolve in another experiment then this prior is updated to the [[uniform distribution (continuous)|uniform distribution]] on the interval [0, 1]. This is obtained by applying [[Bayes' theorem]] to the data set consisting of one observation of dissolving and one of not dissolving, using the above prior. The Haldane prior has been criticized{{By whom|date=July 2010}} on the grounds that it yields an improper posterior distribution that puts 100% of the probability content at either ''p''&nbsp;= 0 or at ''p''&nbsp;= 1 if a finite number of observations have all given the same result. The [[Jeffreys prior]] ''p''<sup>&minus;1/2</sup>(1&nbsp;&minus;&nbsp;''p'')<sup>&minus;1/2</sup> is therefore preferred{{By whom|date=July 2010}} (see below).
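The conjugate beta&ndash;Bernoulli updating described here can be sketched in a few lines of Python (an illustration added for this text, not part of the original article): updating the Haldane prior, treated formally as Beta(0,&nbsp;0), with one observed dissolution and one failure yields the uniform Beta(1,&nbsp;1).

```python
# Sketch: conjugate Beta-Bernoulli updating. For a Beta(alpha, beta) prior
# and Bernoulli data, the posterior is Beta(alpha + successes, beta + failures).

def update_beta(alpha, beta, successes, failures):
    """Posterior hyperparameters for a Beta prior and Bernoulli observations."""
    return alpha + successes, beta + failures

# Haldane prior: alpha = beta = 0 (improper; used here only formally).
# One dissolution and one non-dissolution give the uniform distribution.
posterior = update_beta(0, 0, successes=1, failures=1)
print(posterior)  # (1, 1), i.e. Beta(1, 1), the uniform distribution on [0, 1]
```

The same rule reproduces the criticism in the text: ''n'' identical results update Beta(0,&nbsp;0) to Beta(''n'',&nbsp;0) or Beta(0,&nbsp;''n''), which concentrates all mass at an endpoint.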
 
Priors can be constructed which are proportional to the [[Haar measure]] if the parameter space ''X'' carries a [[transformation group|natural group structure]] which leaves invariant our Bayesian state of knowledge (Jaynes, 1968). This can be seen as a generalisation of the invariance principle used to justify the uniform prior over the three cups in the example above.  For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the [[translation group]] on ''X'', which determines the prior probability as a constant [[improper prior]]. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (i.e., it doesn't matter if we use centimeters or inches, we should get results that are physically the same). In such a case, the scale group is the natural group structure, and the corresponding prior on ''X'' is proportional to 1/''x''. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left and right invariant Haar measures on the [[affine group]] are not equal. Berger (1985, p.&nbsp;413) argues that the right-invariant Haar measure is the correct choice.
 
Another idea, championed by [[Edwin T. Jaynes]], is to use the [[principle of maximum entropy]] (MAXENT). The motivation is that the [[Shannon entropy]] of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on ''X'', one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. In the continuous case, the maximum entropy prior given that the density is normalized with mean zero and variance unity is the standard [[normal distribution]]. The principle of ''[[minxent|minimum cross-entropy]]'' generalizes MAXENT to the case of "updating" an arbitrary prior distribution with suitable constraints in the maximum-entropy sense.
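The discrete claim above is easy to check numerically. A minimal sketch (an illustration added for this text, with arbitrarily chosen comparison distributions): the uniform distribution on a finite space has strictly larger Shannon entropy than any non-uniform alternative.

```python
import math

# Sketch: with no constraint beyond normalization, the uniform distribution
# maximizes Shannon entropy H(p) = -sum p_i log p_i on a discrete space.

def shannon_entropy(p):
    """Shannon entropy in nats; terms with p_i = 0 contribute nothing."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25] * 4                       # uniform on 4 states: H = log 4
others = [
    [0.7, 0.1, 0.1, 0.1],                  # mildly concentrated
    [0.4, 0.3, 0.2, 0.1],                  # close to uniform
    [1.0, 0.0, 0.0, 0.0],                  # degenerate: H = 0
]

assert all(shannon_entropy(q) < shannon_entropy(uniform) for q in others)
print(shannon_entropy(uniform))  # log(4) ≈ 1.3863
```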
 
A related idea, [[reference prior]]s, was introduced by [[José-Miguel Bernardo]]. Here, the idea is to maximize the expected [[Kullback–Leibler divergence]] of the posterior distribution relative to the prior. This maximizes the expected posterior information about ''X'' when the prior density is ''p''(''x''); thus, in some sense, ''p''(''x'') is the "least informative" prior about X. The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., [[Jeffreys prior|Jeffreys' rule]]) may result in priors with problematic behavior.
 
Objective prior distributions may also be derived from other principles, such as [[information theory|information]] or [[coding theory]] (see e.g. [[minimum description length]]) or [[frequentist statistics]] (see [[frequentist matching]]).
 
Philosophical problems associated with uninformative priors are associated with the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us. We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred.  Jaynes' often-overlooked method of transformation groups can answer this question in some situations.<ref>Jaynes (1968), pp. 17, see also Jaynes (2003), chapter 12.  Note that chapter 12 is not available in the online preprint but can be previewed via Google Books.</ref>
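The metric-dependence problem above can be made concrete with a change of variables. In the sketch below (an illustration added for this text; the runner's speed distribution and all numbers are invented), a normal prior on speed ''v'' induces, via ''t''&nbsp;= 100/''v'', a density on the 100-metre time that is ''not'' normal, so "normal on speed" and "normal on time" are genuinely different priors.

```python
import math

# Sketch: if v ~ Normal(mu_v, sigma_v), the density induced on t = 100 / v is
# f_T(t) = f_V(100 / t) * 100 / t**2 (change-of-variables / Jacobian formula).
# All parameter values here are made up for illustration.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def induced_time_density(t, mu_v=8.0, sigma_v=1.0):
    """Density of t = 100 / v when v ~ Normal(mu_v, sigma_v), for t > 0."""
    v = 100.0 / t
    return normal_pdf(v, mu_v, sigma_v) * 100.0 / t ** 2

# A normal prior placed directly on t (matched at the typical time 12.5 s)
# disagrees with the induced density away from the centre: different priors.
print(induced_time_density(10.0), normal_pdf(10.0, 12.5, 1.5625))
```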
 
Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, the '''{{visible anchor|logarithmic prior}}''', which is the uniform prior on the logarithm of proportion. The [[Jeffreys prior]] attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion ''p'' is ''p''<sup>&minus;1/2</sup>(1&nbsp;&minus;&nbsp;''p'')<sup>&minus;1/2</sup>, which differs from Jaynes' recommendation.
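Normalizing the Jeffreys prior for a proportion gives the Beta(1/2,&nbsp;1/2) distribution; its normalizing constant is the beta function B(1/2,&nbsp;1/2)&nbsp;= Γ(1/2)²/Γ(1)&nbsp;= π. A short sketch (added for this text) verifies this from the gamma function:

```python
import math

# Sketch: the Jeffreys prior for a Bernoulli proportion is proportional to
# p**-0.5 * (1 - p)**-0.5, i.e. Beta(1/2, 1/2). Its normalizing constant is
# B(1/2, 1/2) = Gamma(1/2)**2 / Gamma(1) = pi.

def beta_function(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

Z = beta_function(0.5, 0.5)
print(Z)  # ≈ 3.141592653589793

def jeffreys_density(p):
    """Normalized Jeffreys prior density for a Bernoulli parameter p in (0, 1)."""
    return p ** -0.5 * (1 - p) ** -0.5 / Z
```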
 
Priors based on notions of [[algorithmic probability]] are used in [[inductive inference]] as a basis for induction in very general settings.
 
Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used ''routinely'', i.e., with many different data sets, it should have good [[frequentist]] properties. Normally a [[Bayesian probability|Bayesian]] would not be concerned with such issues, but it can be important in this situation. For example, one would want any [[decision theory|decision rule]] based on the posterior distribution to be [[admissible decision rule|admissible]] under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with [[hierarchical Bayes model]]s; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.
 
==Improper priors==
 
If Bayes' theorem is written as
:<math>P(A_i|B) = \frac{P(B | A_i) P(A_i)}{\sum_j P(B|A_j)P(A_j)}\, ,</math>
then it is clear that the same result would be obtained if all the prior probabilities ''P''(''A''<sub>''i''</sub>) and ''P''(''A''<sub>''j''</sub>) were multiplied by a given constant; the same would be true for a [[continuous random variable]].  If the summation in the denominator converges, the posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors may only need to be specified in the correct proportion. Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities.  When this is the case, the prior is called an '''improper prior'''.  However, the posterior distribution need not be a proper distribution if the prior is improper. This is clear from the case where event ''B'' is independent of all of the ''A''<sub>''j''</sub>.
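The invariance under rescaling the prior can be shown directly in the discrete case. A minimal sketch (an illustration added for this text, with made-up weights and likelihoods): multiplying every prior weight by the same constant leaves the normalized posterior unchanged, which is why priors need only be specified up to proportion.

```python
# Sketch: discrete Bayes' theorem. The posterior depends on the prior weights
# only through their ratios, so scaling all weights by a constant changes nothing.

def posterior(prior_weights, likelihoods):
    """Normalized posterior from (possibly unnormalized) prior weights."""
    unnorm = [w * l for w, l in zip(prior_weights, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

likelihoods = [0.9, 0.5, 0.1]
p1 = posterior([0.2, 0.3, 0.5], likelihoods)   # proper prior
p2 = posterior([2.0, 3.0, 5.0], likelihoods)   # same proportions, scaled by 10

assert all(abs(a - b) < 1e-12 for a, b in zip(p1, p2))
print(p1)
```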
 
Some statisticians{{Citation needed|date=December 2008}} use improper priors as [[uninformative prior]]s.  For example, if they need a prior distribution for the mean and variance of a random variable, they may assume ''p''(''m'',&nbsp;''v'')&nbsp;~&nbsp;1/''v'' (for ''v''&nbsp;>&nbsp;0) which would suggest that any value for the mean is "equally likely" and that a value for the positive variance becomes "less likely" in inverse proportion to its value.  Many authors (Lindley, 1973; De Groot, 1937; Kass and Wasserman, 1996){{Citation needed|date=December 2008}} warn against the danger of over-interpreting those priors since they are not probability densities. The only relevance they have is found in the corresponding posterior, as long as it is well-defined for all observations. (The [[Beta distribution#Haldane.27s prior probability .28 Beta.280.2C0.29 .29|Haldane prior]] is a typical counterexample.{{Clarify|reason=counterexample of what?|date=May 2011}}{{Citation needed|date=May 2011}})
 
=== Examples ===
Examples of improper priors include:
* Beta(0,0), the [[beta distribution]] for α=0, β=0.
* The [[uniform distribution (continuous)|uniform distribution]] on an infinite interval (i.e., a half-line or the entire real line).
* The logarithmic prior on the positive reals.{{Citation needed|date=October 2010}}
 
==Other priors==
The concept of [[algorithmic probability]] provides a route to specifying prior probabilities based on the relative complexity of the alternative models being considered.
 
== Notes ==
<references/>

== References ==
* {{cite book |author=Rubin, Donald B.; [[Andrew Gelman|Gelman, Andrew]]; John B. Carlin; Stern, Hal |title=Bayesian Data Analysis |edition=2nd |publisher=Chapman & Hall/CRC |location=Boca Raton |year=2003 |isbn=1-58488-388-X |mr=2027492 }}
* {{cite book |last=Berger |first=James O. |title=Statistical decision theory and Bayesian analysis |publisher=Springer-Verlag |location=Berlin |year=1985 |isbn=0-387-96098-8 |mr=0804611 }}
* {{cite journal |first1=James O. |last1=Berger |first2=William E. |last2=Strawderman |title=Choice of hierarchical priors: admissibility in estimation of normal means |journal=[[Annals of Statistics]] |volume=24 |issue=3 |pages=931&ndash;951 |year=1996 |doi=10.1214/aos/1032526950 |mr=1401831 |zbl=0865.62004 }}
* {{cite journal |first=Jose M. |last=Bernardo |title=Reference Posterior Distributions for Bayesian Inference |journal=[[Journal of the Royal Statistical Society]], Series B |volume=41 |issue=2 |pages=113&ndash;147 |year=1979 |mr=0547240 |jstor=2985028 }}
* {{cite journal |title=The formal definition of reference priors |author1=James O. Berger |authorlink1=James Berger (statistician) |author2=José M. Bernardo |authorlink2=José-Miguel Bernardo |author3=Dongchu Sun |journal=Annals of Statistics |year=2009 |volume=37 |issue=2 |pages=905&ndash;938 |arxiv=0904.0156 |doi=10.1214/07-AOS587 }}
* {{cite journal |last=Jaynes |first=Edwin T. |authorlink=Edwin T. Jaynes |title=Prior Probabilities |journal=IEEE Transactions on Systems Science and Cybernetics |volume=4 |issue=3 |pages=227&ndash;241 |date=Sep 1968 |doi=10.1109/TSSC.1968.300117 |url=http://bayes.wustl.edu/etj/articles/prior.pdf |accessdate=2009-03-27}}
** Reprinted in {{cite book |author=Rosenkrantz, Roger D. |title=E. T. Jaynes: papers on probability, statistics, and statistical physics |publisher=Kluwer Academic Publishers |location=Boston |year=1989 |isbn=90-277-1448-7 |pages=116&ndash;130}}
* {{cite book |last=Jaynes |first=Edwin T. |authorlink=Edwin T. Jaynes |title=Probability Theory: The Logic of Science |publisher=Cambridge University Press |year=2003 |isbn=0-521-59271-2 |url=http://www-biba.inrialpes.fr/Jaynes/prob.html }}
* {{cite journal |last=Williamson |first=Jon |title=Review of Bruno de Finetti, ''Philosophical Lectures on Probability'' |journal=Philosophia Mathematica |volume=18 |issue=1 |pages=130&ndash;135 |year=2010 |doi=10.1093/philmat/nkp019 |url=http://www.kent.ac.uk/secl/philosophy/jw/2009/deFinetti.pdf |accessdate=2010-07-02}}

==External links==
* [http://www.overcomingbias.com/2008/04/conf-space.html Intuitive Explanation of Classical Configuration Spaces]
* [http://ford.ieor.berkeley.edu/cspace Interactive Visualization of the C-space for a Robot Arm with Two Rotational Links] from [[UC Berkeley]]
* [http://www.youtube.com/watch?v=SBFwgR4K1Gk&list=UUswRb5tFvit2fXAiZtwpYuA&index=1&feature=plcp Configuration Space Visualization] from [[Free University of Berlin]]

{{DEFAULTSORT:Configuration Space}}
[[Category:Classical mechanics]]
[[Category:Manifolds]]
[[Category:Topology]]
[[Category:Bayesian statistics]]
[[Category:Probability assessment]]

[[ca:Espai de configuració]]
[[cs:Konfigurační prostor]]
[[de:Konfigurationsraum]]
[[es:Espacio de configuración]]
[[fr:Espace de configuration]]
[[ko:짜임새 공간]]
[[it:Spazio delle configurazioni]]
[[nl:Configuratieruimte]]
[[pt:Espaço de configuração]]
[[ru:Пространство конфигураций]]
[[zh:位形空间]]

Revision as of 03:27, 12 August 2014
