788 results for RST-invariant object representation
Abstract:
The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change-point detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series with the use of a meaningful representation and a fuzzy inference system. Change-point detection is based on a shape space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method, and then on a financial data set to test its general applicability. A comparison to a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational cost. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts not familiar with data mining.
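The pipeline the abstract describes (detect change points from a geometric representation, then query them with linguistic terms) can be sketched roughly as follows. This is a minimal illustration, not the paper's shape-space method; the window/threshold parameters and the "steep increase" membership function are invented for the example.

```python
# Minimal sketch: flag change points via local slope shifts, then score them
# against a fuzzy linguistic term. Not the paper's shape-space method; the
# parameters and membership function are illustrative assumptions.

def detect_change_points(series, window=3, threshold=1.0):
    """Return (index, slope_change) pairs where the mean slope before and
    after index i differs by more than `threshold`."""
    points = []
    for i in range(window, len(series) - window):
        before = (series[i] - series[i - window]) / window
        after = (series[i + window] - series[i]) / window
        if abs(after - before) > threshold:
            points.append((i, after - before))
    return points

def steep_increase(delta, low=1.0, high=3.0):
    """Ramp membership function for the linguistic term 'steep increase'."""
    if delta <= low:
        return 0.0
    if delta >= high:
        return 1.0
    return (delta - low) / (high - low)

# Flat, then rising, then flat again: change points at the two 'knees'.
series = [0, 0, 0, 0, 1, 3, 6, 9, 12, 15, 15, 15, 15]
cps = detect_change_points(series, window=2, threshold=0.8)
query = [(i, steep_increase(d)) for i, d in cps]  # fuzzy query over detections
```

A query such as "show steep increases" then reduces to ranking the detected points by their membership degree.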
Abstract:
We propose a probabilistic object classifier for outdoor scene analysis as a first step in solving the problem of scene context generation. The method begins with top-down control, which uses previously learned models (appearance and absolute location) to obtain an initial pixel-level classification. This information provides us with the cores of objects, which are used to acquire a more accurate object model. Growing these cores by specific active regions then allows us to obtain accurate recognition of known regions. Next, a general segmentation stage segments the unknown regions using a bottom-up strategy. Finally, the last stage performs a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and a recognition of each segment as a given object class or as an unknown segmented object. Furthermore, experimental results are shown and evaluated to prove the validity of our proposal.
Abstract:
The number of digital images has been increasing exponentially in the last few years. People have problems managing their image collections and finding a specific image. An automatic image categorization system could help them to manage images and find specific images. In this thesis, an unsupervised visual object categorization system was implemented to categorize a set of unknown images. The system is unsupervised and hence does not need manually labeled training images. Therefore, the number of possible categories and images can be huge. The system implemented in the thesis extracts local features from the images. These local features are used to build a codebook. The local features and the codebook are then used to generate a feature vector for each image. Images are categorized based on the feature vectors. The system is able to categorize any given set of images based on their visual appearance. Images that have similar image regions are grouped together in the same category. Thus, for example, images which contain cars are assigned to the same cluster. The unsupervised visual object categorization system can be used in many situations, e.g., in an Internet search engine. The system can categorize images for a user, and the user can then easily find a specific type of image.
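The codebook pipeline outlined above (local features → codebook → per-image feature vector → grouping) can be sketched as below. This is a toy illustration, not the thesis's implementation: real systems extract descriptors such as SIFT, whereas the 2-D "descriptors", the tiny k-means, and the distance threshold here are invented for the example.

```python
# Toy bag-of-visual-words sketch: build a codebook from pooled local features,
# turn each image into a normalized codeword histogram, and group images whose
# histograms are close. Feature values and thresholds are illustrative.
import math

def mean(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def kmeans(points, k, iters=20):
    """Tiny k-means (deterministic init) to build the codebook."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: math.dist(p, centers[c]))].append(p)
        centers = [mean(cl) if cl else centers[j] for j, cl in enumerate(clusters)]
    return centers

def histogram(features, codebook):
    """Quantize each local feature to its nearest codeword; normalize counts."""
    h = [0] * len(codebook)
    for f in features:
        h[min(range(len(codebook)), key=lambda c: math.dist(f, codebook[c]))] += 1
    total = sum(h)
    return [x / total for x in h]

def same_category(h1, h2, tol=0.5):
    """Group two images if their codeword histograms are close."""
    return math.dist(h1, h2) < tol

# Each "image" is a bag of 2-D local feature descriptors.
images = {
    "car1": [(0.1, 0.1), (0.2, 0.0), (0.0, 0.2)],
    "car2": [(0.15, 0.05), (0.1, 0.2), (0.05, 0.1)],
    "tree1": [(5.0, 5.1), (5.2, 4.9), (4.9, 5.0)],
}
codebook = kmeans([f for feats in images.values() for f in feats], k=2)
hists = {name: histogram(feats, codebook) for name, feats in images.items()}
```

Images whose local features quantize to similar codeword distributions end up with similar histograms, which is what puts the two "car" images in the same cluster.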
Abstract:
In this work we consider the nonlinear equivalent representation form of oscillators that exhibit nonlinearities in both the elastic and the damping terms. The nonlinear damping effects are considered to be described by fractional-power velocity terms, which provide better predictions of the dissipative effects observed in some physical systems. It is shown that their effects on the system's dynamic response are equivalent to a shift in the coefficient of the linear damping term of a Duffing oscillator. Then, the numerical integration predictions based on the equivalent representation form, given by the well-known forced, damped Duffing equation, are compared to the numerical integration values of the original equations of motion. The applicability of the proposed procedure is evaluated by studying the dynamic response of four nonlinear oscillators that arise in engineering applications such as nanoresonators, microresonators, human wrist movements, structural engineering design, and the chain dynamics of polymeric materials at high extensibility, among others.
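The equivalent representation the paper integrates, the forced, damped Duffing equation x'' + δx' + αx + βx³ = F cos(ωt), can be solved numerically with a standard RK4 scheme. The parameter values below are illustrative assumptions, not those used in the paper:

```python
# Numerical integration of the forced, damped Duffing equation
#   x'' + delta*x' + alpha*x + beta*x**3 = F*cos(omega*t)
# with classical RK4. Parameter values are illustrative assumptions.
import math

def duffing(t, x, v, delta=0.2, alpha=1.0, beta=1.0, F=0.3, omega=1.2):
    """Return (dx/dt, dv/dt) for the Duffing oscillator."""
    return v, F * math.cos(omega * t) - delta * v - alpha * x - beta * x**3

def rk4_step(f, t, x, v, h):
    """One classical Runge-Kutta step for a second-order system."""
    k1x, k1v = f(t, x, v)
    k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# Integrate from rest; read off the steady-state amplitude after transients.
t, x, v, h = 0.0, 0.0, 0.0, 0.01
xs = []
for _ in range(20000):
    x, v = rk4_step(duffing, t, x, v, h)
    t += h
    xs.append(x)
amplitude = max(abs(p) for p in xs[-5000:])
```

In the paper's setting, the fractional-power damping term is replaced by an equivalent shift of the linear damping coefficient delta; comparing two such runs (original versus equivalent form) is how the equivalence is assessed.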
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality allowing accurate measurements of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities not sharing a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision together are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused together in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at a high rate and with high accuracy. The proposed approach takes the latency of the vision system into account explicitly, to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining the velocity profile that gives a rapid approach with minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integrating several sensor modalities can increase the accuracy of the measurements significantly.
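The fusion idea can be illustrated on a deliberately simplified scalar example: a fast but noisy force/proprioception-style reading fused with an occasional, accurate vision-style reading through the Kalman measurement update. The thesis uses a full extended Kalman filter with an object-shape model; the rates and noise levels below are invented for the illustration.

```python
# Scalar Kalman-update sketch of multimodal fusion: fast noisy readings every
# step, accurate vision readings only every 10th step. Noise levels and rates
# are illustrative assumptions, not the thesis's EKF.
import random

def kf_update(mean, var, z, z_var):
    """Standard Kalman measurement update for a scalar state."""
    k = var / (var + z_var)          # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

rng = random.Random(1)
true_pos = 2.0
mean, var = 0.0, 100.0               # vague prior on the target position
for step in range(50):
    z_fast = true_pos + rng.gauss(0, 0.5)      # fast, noisy modality
    mean, var = kf_update(mean, var, z_fast, 0.25)
    if step % 10 == 0:                         # slow, accurate modality
        z_vision = true_pos + rng.gauss(0, 0.05)
        mean, var = kf_update(mean, var, z_vision, 0.0025)
```

Fusing both modalities drives the posterior variance far below what either sensor alone achieves at its own rate, which is the point of combining them.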
Abstract:
Interview
Abstract:
Hume's project concerning the conflict between liberty and necessity is "reconciliatory". But what is the nature of Hume's project? Does he solve a problem in metaphysics only? And when Hume says that the dispute between the doctrines of liberty and necessity is merely verbal, does he mean that there is no genuine metaphysical dispute between the doctrines? In the present essay I argue that: (1) there is room for liberty in Hume's philosophy, and not only because the position is pro forma compatibilist, even though this has importance for the recognition that Hume's main concern when discussing the matter is with practice; (2) the position does not involve a "subjectivization" of every form of necessity: it is not compatibilist because it creates a space for the claim that the operations of the will are non-problematically necessary through a weakening of the notion of necessity as it applies to external objects; (3) Hume holds that the ordinary phenomena of mental causation do not preclude the attribution of moral responsibility, which fits perfectly with his identification of the object of moral evaluation: the whole of the character of a person, in relation to which there is, nonetheless, liberty. I support these assertions by a close reading of what Hume states in Section 8 of the first Enquiry.
Abstract:
This study presents the information required to describe the machine and device resources in the turret punch press environment which are needed for the development of an analysing method for automated production. The description of product and device resources and their interconnectedness is the starting point for method comparison, cost estimation, production planning and optimisation. The manufacturing method cannot be optimised unless the variables and their interdependence are known. Sheet metal parts in particular may become remarkably complex, and their automatic manufacture may be difficult or, with some automatic equipment, even impossible if the manufacturing properties are not known. This thesis consists of three main elements, which constitute a triangulation. In the first phase of the triangulation, manufacturing on a turret punch press is examined in order to find the factors that affect the efficiency of production. In the second phase, the manufacturability of products on turret punch presses is examined through a set of laboratory tests. The third phase involves an examination of five industrial parts. The key findings of this study are: the full efficiency of machining at a high level of automation cannot be achieved unless the raw materials used in production and the dependencies between the machine and its tools are well known. Machine-specific manufacturability factors for turret punch presses were not taken into account in the industrial case samples. On the grounds of the performed tests and industrial case samples, the designer of a sheet metal product can directly influence the machining time, material loss, energy consumption and the number of tools required on a turret punch press by making decisions in the way presented in the hypothesis of this study.
The sheet metal parts to be produced can be optimised for manufacture on a turret punch press when the material to be used and the kinds of machine and tool options available are known. This requires in-depth, machine- and tool-specific knowledge of the machine and tool properties. None of the optimisation starting points described here is a separate entity; instead, they are all connected to each other.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not been evaluated in large scale studies yet. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first and second year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
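The invariant-based workflow described above can be hinted at in plain code: state the invariant first, then check it at every step of the loop. This is only an executable analogy; Socos works with invariant diagrams and discharges these conditions with a theorem prover rather than runtime assertions.

```python
# Executable analogy of invariant-based programming: the loop invariant of an
# insertion sort is asserted before every insertion step, and the postcondition
# after the loop. In Socos these would be proof obligations, not runtime checks.
def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        # Invariant: the prefix a[0:i] is sorted before each insertion.
        assert all(a[k] <= a[k + 1] for k in range(i - 1))
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    # Postcondition: the whole array is sorted.
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    return a
```

Keeping the invariant explicit like this mirrors the method's central idea: the program is extended only in steps that can be shown to preserve it.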
Abstract:
The changing approaches within feminist development economics bring with them new ways of speaking about women, men and development. By analysing texts written in the field of feminist economics from the 1960s to the early 2000s, the present study documents how the language of text producers in development economics constitutes, and depends on, these writers' attitudes towards development issues and towards women and men. The analysis focuses on how activation and passivation processes are used in the representation of the two main participants, women and men, how the concept of gender is introduced, and how development issues change across approaches, over time and between genres. The theoretical framework spans several disciplines: systemic functional grammar and critical discourse analysis, but also organisational discourse analysis and development studies. The texts selected for analysis come from three different sources: plans from the world conferences on women organised by the United Nations, resolutions on women and development adopted by the United Nations General Assembly, and plans of action for women and development drafted by the United Nations Food and Agriculture Organization (FAO). The linguistic method of analysis builds on the system of roles and ways of representing participants developed by Halliday and Van Leeuwen. For each decade and each genre, the study examines the changes in process types and participant roles, as well as the shift of focus in women-related issues and in the conceptualisation of gender. The quantitative analysis is complemented and reinforced by a detailed analysis of text fragments from different points in time and different approaches. The study's findings are grammatical and lexical in nature and relate to gender, genre and time. The study shows that activation processes are considerably more numerous than passivation processes in the representation of women.
A better understanding of participant representation is, however, achieved by regrouping the grammatical processes into identifying, activating and directed processes. The shift from a focus on women to a focus on gender is not so much a change in the processes that represent the participants as a change in the rhetoric of the approaches and their focus: from the integration of women to women's empowerment, from women's situation to gender relations, from urgent additions to social conflict and cooperation.
Abstract:
This paper is devoted to an analysis of some aspects of Bas van Fraassen's views on representation. While I agree with most of his claims, I disagree on the following three issues. Firstly, I contend that some isomorphism (or at least homomorphism) between the representor and what is represented is a universal necessary condition for the success of any representation, even in the case of misrepresentation. Secondly, I argue that the so-called "semantic" or "model-theoretic" construal of theories does not give proper due to the role played by true propositions in successful representing practices. Thirdly, I attempt to show that the force of van Fraassen's pragmatic - and antirealist - "dissolution" of the "loss of reality objection" loses its bite when we realize that our cognitive contact with real phenomena is achieved not by representing but by expressing true propositions about them.
Abstract:
"Helmiä sioille", pearls before swine, is what one says in Finnish about something good and fine that is received by a recipient who does not want to, or lacks the ability to, understand, appreciate or exploit the full potential of the received object, is uninterested in it, or does not like it. For such relatively stable multi-word expressions, which are stored in speakers' memories and which display various kinds of irregular features in their structure, linguistics uses, among others, the terms "idiom" or "phraseological unit". One such irregularity is the fact that the meaning of the expression is not the same as what one would arrive at if one regarded it as an ordinary regular phrase. Another irregularity, observed by idiom researchers, lies in the limited ability of many idioms, compared with regular phrases, to vary in form and meaning. For this reason, one often speaks of a "base form" and "base meaning" of idioms, and variation is viewed as deviation from these. But when one looks at a large number of occurrences of idioms in language use, one notices that many of them allow variation, even to such an extent that the boundaries between a variant and a "base form" are blurred, and instead of one idiom we suddenly encounter a "family" of several related expressions. All this raises the question of how these expressions should actually be represented in the language. The dissertation carries out a critical examination of various earlier approaches to describing phraseological units, in order to clarify what difficulties their structure and variation pose for linguistic theory. At the same time, an alternative way of describing these expressions is presented.
A systematic and formal model developed in this dissertation integrates a description of idioms at many different linguistic levels and depicts their variation in the form of a network, as a result of the interplay between the idiom's structure and the contexts in which it occurs, as well as of its interaction with other fixed expressions. The model builds on an in-depth, usage-based analysis of the Finnish idiom "X HEITTÄÄ HELMIÄ SIOILLE" (X casts pearls before swine).