960 results for irregular singularity
Abstract:
As the technologies for the fabrication of high-quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method for locating subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method is capable of processing the image without any user intervention and, unlike many other commercial and academic packages, does not demand any input parameters. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation less than two. Experimental results show that the method is capable of gridding microarray images with irregular spots, varying surface intensity distributions and more than 50% contamination.
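As an illustration of the projection-profile idea described in this abstract, the following is a minimal Python sketch; the smoothing window, the valley-detection criterion and the function name are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch of projection-profile gridding (not the paper's exact method).
import numpy as np
from scipy.signal import find_peaks

def grid_lines_from_profile(image: np.ndarray, axis: int = 0, min_gap: int = 5):
    """Estimate grid-line positions along one axis of a grayscale microarray image.

    The intensity projection profile is the sum of pixel intensities along `axis`;
    spot rows/columns appear as peaks and the gaps between them as valleys.
    """
    profile = image.sum(axis=axis).astype(float)
    profile = np.convolve(profile, np.ones(5) / 5, mode="same")  # light smoothing
    # Valleys of the profile (peaks of its negative) separate adjacent spot rows/columns.
    valleys, _ = find_peaks(-profile, distance=min_gap)
    return valleys

# Usage on a hypothetical subimage:
# img = np.load("subimage.npy")
# rows = grid_lines_from_profile(img, axis=1)   # horizontal grid lines
# cols = grid_lines_from_profile(img, axis=0)   # vertical grid lines
```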
Abstract:
Fingerprint-based authentication systems are among the most cost-effective biometric authentication techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength, and their local orientation vector is formulated with respect to the base line of the finger. Feature-level fusion is carried out and a 32-element feature template is obtained. A matching score is formulated for the identification, and 100% accuracy was obtained for a database of 300 persons. The polygonal feature vector helps to reduce the size of the feature database from the present 70-100 minutiae features to just 32 features, and a lower matching threshold can be fixed compared to single-finger-based identification.
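The global singularities mentioned above (core and delta points) are commonly located from a block orientation field; a widely used criterion is the Poincaré index, sketched below in Python purely as an illustration. The thesis's own directional-field-strength formulation may differ, and the array names are assumptions.

```python
# Illustrative sketch: locating fingerprint singular points (core/delta) with the
# Poincare index on a block orientation field. This is a common textbook approach,
# not necessarily the exact directional-field-strength method of the thesis.
import numpy as np

def poincare_index(theta: np.ndarray) -> np.ndarray:
    """theta: block orientation field in radians, values in [0, pi).
    Returns an array of Poincare indices; ~ +1/2 at cores, ~ -1/2 at deltas."""
    h, w = theta.shape
    index = np.zeros((h, w))
    # Closed 8-neighbour loop around each interior block.
    loop = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            total = 0.0
            for (di1, dj1), (di2, dj2) in zip(loop[:-1], loop[1:]):
                d = theta[i + di2, j + dj2] - theta[i + di1, j + dj1]
                # Wrap orientation differences into (-pi/2, pi/2].
                if d > np.pi / 2:
                    d -= np.pi
                elif d <= -np.pi / 2:
                    d += np.pi
                total += d
            index[i, j] = total / (2 * np.pi)
    return index

# cores = np.argwhere(np.isclose(poincare_index(theta), 0.5, atol=0.1))
```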
Abstract:
Marine product export plays a pivotal role in the fish export economy of Kerala. The post-WTO period has witnessed a strengthening of the food safety and quality standards applied to food products in the developed countries. Market actions by the primary importers, such as the EU, the US and Japan, have far-reaching reverberations and implications for marine product exports from developing nations. The article focuses on Kerala's marine product exports, which had been targeting the markets of the EU, the US and Japan, and the concomitant shift in markets owing to the stringent stipulations under the WTO regime. Despite the overwhelming importance of the EU in the marine product exports of the state, the pronounced influence of irregular components on the quantity and value of marine product exports to the EU in the post-WTO period raises concern. However, the tendencies of market diversification, validated by the forecasts generated for the emerging markets of the SEA, the MEA and others, to an extent allay the pressures on the marine product export sector of the state, which had hitherto relied heavily on the markets of the EU, the US and Japan.
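The "irregular components" referred to above are the remainder term of a classical time-series decomposition. The sketch below shows one conventional way to isolate such a component with statsmodels, on synthetic quarterly data that stand in for the (unavailable here) export series.

```python
# Minimal sketch: isolating the irregular (remainder) component of a quarterly series
# by classical decomposition. The data below are synthetic and illustrative only.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("1990-01-01", periods=80, freq="Q")
trend = np.linspace(100, 300, 80)
seasonal = 20 * np.sin(2 * np.pi * np.arange(80) / 4)
noise = np.random.default_rng(0).normal(0, 15, 80)
exports = pd.Series(trend + seasonal + noise, index=idx)

result = seasonal_decompose(exports, model="additive", period=4)
# result.resid is the irregular component; its relative size indicates how strongly
# erratic shocks (rather than trend or seasonality) drive the series.
print(result.resid.dropna().std() / exports.mean())
```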
Abstract:
The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and CCDs with high resolving power, variable star data are accumulating at the scale of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, and one way to identify and classify variable stars is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modelling and classification. The modelling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. The classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to the daily variation of daylight and to weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as the Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. The wrong detection of a period can be due to several reasons, such as power leakage to other frequencies, which is caused by the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases when subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "the processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
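As a concrete illustration of one of the parametric methods named above, the sketch below recovers a period from a synthetic, unevenly sampled light curve with the Lomb-Scargle periodogram as implemented in astropy; the data, noise level and true period are illustrative assumptions, not results from the thesis.

```python
# Minimal sketch: period search on an unevenly sampled synthetic light curve
# with the Lomb-Scargle periodogram (astropy).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 300))           # uneven time sampling (days)
true_period = 2.5
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, 300)

frequency, power = LombScargle(t, mag, 0.05).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.3f} d (true {true_period} d)")

# Phased light curve: fold the observations on the recovered period.
phase = (t / best_period) % 1.0
```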
Abstract:
One of the interesting consequences of Einstein's General Theory of Relativity is the black hole solutions. Until Hawking's work in the 1970s, it was believed that black holes are perfectly black. The General Theory of Relativity says that black holes are objects which absorb both matter and radiation crossing the event horizon. The event horizon is a surface through which even light is not able to escape; it acts as a one-sided membrane that allows the passage of particles in only one direction, towards the centre of the black hole. All particles absorbed by a black hole increase its mass, and thus the size of the event horizon also increases. Hawking showed in the 1970s that when quantum mechanical laws are applied to black holes they are not perfectly black but can emit radiation. Thus a black hole can have a temperature, known as the Hawking temperature. In this thesis we have studied some aspects of black holes in f(R) theory of gravity and in Einstein's General Theory of Relativity. The scattering of a scalar field in this background space-time, studied in the first chapter, shows that the extended black hole scatters scalar waves and has a scattering cross section; by applying the tunnelling mechanism we have obtained the Hawking temperature of this black hole. In the following chapter we have investigated the quasinormal properties of the extended black hole. We have studied the electromagnetic and scalar perturbations in this space-time and find that the black hole frequencies are complex and show exponential damping, indicating that the black hole is stable against the perturbations. In the present study we show that black holes not only exist in modified gravities but also have properties similar to those of black hole space-times in the General Theory of Relativity. 2+1-dimensional, or three dimensional, black holes are simplified examples of the more complicated four dimensional black holes, and are therefore known as toy models for four dimensional black holes in the General Theory of Relativity. We have studied some properties of these types of black holes in the Einstein model (General Theory of Relativity). A three dimensional black hole known as MSW is taken for our study. The thermodynamics and spectroscopy of the MSW black hole are studied; the area spectrum is obtained and found to be equispaced, and different thermodynamic properties are derived. The Dirac perturbation of this three dimensional black hole is studied and the resulting quasinormal spectrum is obtained. The different quasinormal frequencies are tabulated, and these values show an exponential damping of oscillations, indicating that the black hole is stable against the massless Dirac perturbation. In the General Theory of Relativity almost all solutions contain singularities; the cosmological solution and the different black hole solutions of Einstein's field equations contain singularities. Regular black hole solutions are those which are solutions of Einstein's equation and have no singularity at the origin: they possess an event horizon but no central singularity. Such a solution was first put forward by Bardeen, and Hayward proposed a similar regular black hole solution. We have studied the thermodynamics and spectroscopy of Hayward regular black holes and obtained the different thermodynamic properties and the area spectrum, which is a function of the horizon radius. The entropy-heat capacity curve has a discontinuity at some value of entropy, indicating a phase transition.
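For orientation, the standard textbook relations that underlie such thermodynamic and spectroscopic analyses are reproduced below in natural units (ħ = c = G = k_B = 1). These are generic results, not the thesis's specific expressions for the extended or Hayward black holes.

```latex
% Generic reference relations (natural units); not the thesis's specific results.
\begin{equation}
  T_H = \frac{\kappa}{2\pi}, \qquad
  S_{BH} = \frac{A}{4}, \qquad
  T_H^{\text{Schwarzschild}} = \frac{1}{8\pi M}, \qquad
  A_n \simeq \gamma\, n \quad (n = 1, 2, \dots),
\end{equation}
% where $\kappa$ is the surface gravity, $A$ the horizon area, and the last relation
% is the generic form of an equispaced area spectrum with a model-dependent constant $\gamma$.
```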
Abstract:
Heterochromatin Protein 1 (HP1) is an evolutionarily conserved protein required for the formation of higher-order chromatin structures and epigenetic gene silencing. The objective of the present work was to functionally characterise HP1-like proteins in Dictyostelium discoideum and to investigate their function in heterochromatin formation and transcriptional gene silencing. The Dictyostelium genome encodes three HP1-like proteins (hcpA, hcpB, hcpC), of which only two, hcpA and hcpB, but not hcpC, were found to be expressed during vegetative growth and under developmental conditions. Therefore hcpC, although not an obvious pseudogene, was excluded from this study. Both HcpA and HcpB show the characteristic conserved domain structure of HP1 proteins, consisting of an N-terminal chromo domain and a C-terminal chromo shadow domain separated by a hinge. Both proteins show all biochemical activities characteristic of HP1 proteins, such as homo- and heterodimerisation in vitro and in vivo, and DNA binding activity. HcpA furthermore seems to bind to K9-methylated histone H3 in vitro. The proteins thus appear to be structurally and functionally conserved in Dictyostelium. The proteins display a largely identical subnuclear distribution in several minor foci and concentrate in one major cluster at the nuclear periphery. The localisation of this cluster adjacent to the nucleus-associated centrosome and its mitotic behaviour strongly suggest that it represents centromeric heterochromatin. Furthermore, it is characterised by histone H3 lysine-9 dimethylation (H3K9me2), which is another hallmark of Dictyostelium heterochromatin. Therefore, one important aspect of the work was to characterise the so-far largely unknown structural organisation of centromeric heterochromatin. The Dictyostelium homologue of the inner centromere protein INCENP (DdINCENP) co-localised with both HcpA and H3K9me2 during metaphase, providing further evidence that H3K9me2 and HcpA/B localisation represent centromeric heterochromatin. Chromatin immunoprecipitation (ChIP) showed that two types of high-copy-number retrotransposons (DIRS-1 and skipper), which form large irregular arrays at the chromosome ends that are thought to contain the Dictyostelium centromeres, are characterised by H3K9me2. Neither overexpression of full-length HcpA or HcpB nor deletion of single Hcp isoforms resulted in changes in retrotransposon transcript levels. However, overexpression of a C-terminally truncated HcpA protein, assumed to exert a dominant-negative effect, led to an increase in skipper retrotransposon transcript levels. Furthermore, overexpression of this protein led to severe growth defects in axenic suspension culture and reduced cell viability. In order to elucidate the proteins' functions in centromeric heterochromatin formation, gene knock-outs for both hcpA and hcpB were generated. Both genes could be successfully targeted and disrupted by homologous recombination. The degree of functional redundancy of the two isoforms turned out to be very high, although this was not unexpected. Neither single knock-out mutant showed any obvious phenotype under standard laboratory conditions, and only deletion of hcpA resulted in subtle growth phenotypes when grown at low temperature. All attempts to generate a double null mutant failed. However, both endogenous genes could be disrupted in cells into which a rescue construct ectopically expressing one of the isoforms, either with an N-terminal 6xHis- or GFP-tag, had been introduced.
The data imply that the presence of at least one Hcp isoform is essential in Dictyostelium. The lethality of the hcpA/hcpB double mutant thus greatly hampered functional analysis of the two genes. However, the experiment provided genetic evidence that the GFP-HcpA fusion protein is functional, because of its ability to compensate for the loss of the endogenous HcpA protein. The proteins displayed quantitative differences in dimerisation behaviour, which are conferred by the slightly different hinge and chromo shadow domains at their C-termini. Dimerisation preferences in increasing order were HcpA-HcpA << HcpA-HcpB << HcpB-HcpB. Overexpression of GFP-HcpA or of a chimeric protein containing the HcpA C-terminus (GFP-HcpBNAC), but not overexpression of GFP-HcpB or GFP-HcpANBC, led to increased frequencies of anaphase bridges in late mitotic cells, which are thought to be caused by telomere-telomere fusions. Chromatin targeting of the two proteins is achieved by at least two distinct mechanisms. The N-terminal chromo domain and hinge of the proteins are required for targeting to centromeric heterochromatin, while the C-terminal portion encoding the CSD is required for targeting to several other chromatin regions at the nuclear periphery that are characterised by H3K9me2. Targeting to centromeric heterochromatin likely involves direct binding to DNA. The Dictyostelium genome encodes all subunits of the origin recognition complex (ORC), which is a possible upstream component of HP1 targeting to chromatin. Overexpressed GFP-tagged OrcB, the Dictyostelium Orc2 homologue, showed a distinct nuclear localisation that partially overlapped with the HcpA distribution. Furthermore, GFP-OrcB localised to the centrosome during the entire cell cycle, indicating an involvement in centrosome function. DnmA is the sole DNA methyltransferase in Dictyostelium and is required for all DNA (cytosine-) methylation. To test its in vivo activity, two different cell lines were established that ectopically expressed DnmA-myc or DnmA-GFP. It was assumed that overexpression of these proteins might increase the 5-methylcytosine (5-mC) levels in the genomic DNA due to genomic hypermethylation. Although DnmA-GFP showed preferential localisation in the nucleus, no changes in the 5-mC levels in the genomic DNA could be detected by capillary electrophoresis.
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers parallelising mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming nowadays becomes a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analysed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs; it enables developers to study the source code and learn from it; and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
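AthenaMP itself is a C++/OpenMP library; as a language-neutral illustration of the task-pool pattern for irregular algorithms mentioned above, the following minimal Python sketch uses a thread pool whose work list grows dynamically. The graph, function names and worker count are illustrative assumptions, not AthenaMP's API.

```python
# Minimal illustration of a task pool for an irregular algorithm: tasks may spawn
# further tasks of unpredictable size (here: traversing a graph of unknown shape).
# This is a Python analogy to the task-pool idea, not AthenaMP's C++/OpenMP code.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def process(node, graph):
    """Do some work on `node` and return its successors (new irregular tasks)."""
    return graph.get(node, [])

def task_pool_traverse(graph, start, workers=4):
    seen = {start}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = {pool.submit(process, start, graph)}
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                for succ in fut.result():
                    if succ not in seen:          # dynamically growing work list
                        seen.add(succ)
                        pending.add(pool.submit(process, succ, graph))
    return seen

# Example irregular workload (hypothetical graph):
print(task_pool_traverse({0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: []}, 0))
```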
Abstract:
Flight operations on short-haul routes in civil aviation are subject to work-specific stress factors that differ in essential respects from those on long-haul routes. A high workload on short-haul routes is associated with many take-offs and landings per day. In addition to the number of flight legs, long flight duty periods and/or irregular working hours, as well as time pressure during short-haul operations, can become a burden for cockpit crew members and lead to symptoms of fatigue. So far, aeromedical and aviation-psychology data have been collected mainly on long-haul routes, with regard to the effects of jet-lag symptoms, and hardly at all on short-haul routes. Therefore, within the DLR project "Investigations into cumulative psychological and physiological effects in flight crews on short-haul routes", a long-term study of workload/strain, fatigue and recovery of cockpit crews on short-haul routes was carried out over 56 days per participant. In cooperation with Deutsche Lufthansa AG, the investigation of the effects of work-specific stress factors on cockpit crew members of the Boeing 737 fleet ran from 2003 to 2006. AIM: Taking theoretically grounded concepts of work psychology into account, the aim of the study was to identify cumulative and acute effects on sleep-wake behaviour, on workload/strain and on fatigue that can arise from consecutive short-haul duties within a period of eight weeks. Data from 29 pilots (N=13 captains; N=16 first officers) were recorded. The mean age was 33.8 ± 7.9 years (captains: 42.0 ± 3.8 years; first officers: 27.4 ± 2.2 years). METHODS: Questionnaires, the sleep log and the flight log were completed efficiently on a handheld PC. Subjective fatigue and workload were operationalised by standardised questionnaires (e.g. the fatigue scale of Samn & Perelli (1982), NASA-TLX). The sleep log and flight log documented sleep-wake behaviour and flight-specific data (e.g. start of duty, end of duty, flight legs, destinations, etc.). The sleep-wake cycle was recorded by actigraphy throughout the entire measurement period. Objective performance was assessed every morning and evening with a computer-based Psychomotor Vigilance Task (PVT) according to Dinges & Powell (1985). Performance in the PVT served as an indicator of a pilot's fatigue. Additional paper-and-pencil questionnaires were intended to provide information on relevant psychosocial conditions not covered by the daily recordings (e.g. job satisfaction, eating habits, relationships with colleagues). RESULTS: With respect to cumulative effects, no change in sleep quality or need for sleep was found over the duration of the study. Fatigue, however, increased during the eight-week investigation. Reaction time in the PVT deteriorated over time on flight duty days. Overall, no critical long-term effects were found. Acute significant effects were found for fatigue, total workload and performance on flight duty days: both fatigue and total workload increased with increasing flight duty time and number of legs, and PVT performance decreased.
The "time on task" effect was particularly evident in the fatigue caused by flying duties from a flight duty period of more than 10 hours and more than 4 legs per day. CONCLUSION: These results provide a scientific data basis from which recommendations can be derived on how crew scheduling for short-haul cockpit personnel can be optimised from an aeromedical and aviation-psychology perspective. They can also make an informed contribution to the discussion of flight duty and rest time regulations at the European level.
Abstract:
Pastoralism and ranching are two different rangeland-based livestock systems in dryland areas of East Africa. Both usually operate under low and irregular rainfall and consequently low overall primary biomass production of high spatial and temporal heterogeneity. Both are usually located far from town centres, market outlets and communication, medical, educational, banking, insurance and other infrastructure. Whereas pastoralists can be regarded as self-employed, gaining their livelihood from managing their individually owned livestock on communal land, ranches mostly employ herders as wage labourers to manage the livestock owned by the ranch on the ranch's own land. Both production systems can be similarly labour-intensive and, with regard to livestock management, require the same type of work, whether carried out as a self-employed pastoralist or as an employed herder on a work contract. Given this similarity, the aim of this study was to comparatively assess how pastoralists and employed herders in northern Kenya view their working conditions, and which criteria they use to assess hardship and rewards in their daily work and working life. Their own perception is compared with the concept of Decent Work developed by the International Labour Organisation (ILO). Samburu pastoralists in Marsabit and Samburu Districts as well as herders on ranches in Laikipia District were interviewed. A qualitative analysis of 47 semi-structured interviews yielded information about daily activities, income, free time, education and social security. Five of 22 open interviews with pastoralists and seven of 13 open interviews with employed herders were fully transcribed and subjected to qualitative content analysis, yielding the life stories of 12 informants. Pastoralists consider it important to have healthy and satisfied animals. The ability to provide food for their family, especially for the children, has a high priority. Hardships for the pastoralists arise when activities are exhausting, challenging and dangerous. For employed herders, conditions are decent if their wages are high enough to provide food for their family and formal education for their children. It is furthermore most important for them to do work in which they are experienced and skilled. Most employed herders were former pastoralists who had lost their animals to drought or raids. There are parallels between the ILO 'Decent Work' concept and the perception of working conditions of pastoralists and employed herders: for example, remuneration is of importance, and appreciation by either the employer or the community is desired. Some aspects that are seen as important by the ILO, such as safety at work and healthy working conditions, play only a secondary role for the pastoralists, who see risky and dangerous tasks as inherent characteristics of their efforts to gain a livelihood in their living environment.
Abstract:
A closed-form solution formula for the kinematic control of manipulators with redundancy is derived using the Lagrangian multiplier method. A differential relationship equivalent to the Resolved Motion Method has also been derived. The proposed method is proved to provide the exact equilibrium state for the Resolved Motion Method. This exactness fixes the repeatability problem of the Resolved Motion Method and establishes a fixed transformation from the workspace to the joint space. Owing to this exactness, the method is also demonstrated to give more accurate trajectories than the Resolved Motion Method. In addition, a new performance measure for redundancy control has been developed. This measure, if used with kinematic control methods, helps achieve dexterous movements, including singularity avoidance. Compared to other measures, such as the manipulability measure and the condition number, this measure tends to give superior performance in terms of preserving the repeatability property and providing smoother joint velocity trajectories. Using the fixed transformation property, Taylor's Bounded Deviation Paths Algorithm has been extended to redundant manipulators.
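For context, the standard velocity-level redundancy resolution obtained with Lagrange multipliers is reproduced below. It is a textbook result closely related to, but not identical with, the closed-form position-level solution derived in the abstract; W is a symmetric positive-definite weighting matrix and J(q) the manipulator Jacobian.

```latex
% Textbook velocity-level redundancy resolution via Lagrange multipliers
% (related to, but not the same as, the thesis's closed-form solution).
\begin{equation}
  \min_{\dot q}\ \tfrac{1}{2}\,\dot q^{\mathsf T} W \dot q
  \quad \text{s.t.} \quad J(q)\,\dot q = \dot x .
\end{equation}
% Stationarity of L = (1/2) q'^T W q' - lambda^T (J q' - x') gives W q' = J^T lambda, hence
\begin{equation}
  \dot q = W^{-1} J^{\mathsf T}\bigl(J W^{-1} J^{\mathsf T}\bigr)^{-1}\dot x ,
\end{equation}
% the weighted pseudoinverse solution underlying Resolved Motion Rate Control.
```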
Abstract:
We propose to analyse shapes as "compositions" of distances in Aitchison geometry, as an alternate and complementary tool to classical shape analysis, especially when size is non-informative. Shapes are typically described by the locations of user-chosen landmarks. However the shape, considered as invariant under scaling, translation, mirroring and rotation, does not uniquely define the locations of landmarks. A simple approach is to use distances between landmarks instead of the landmark locations themselves. Distances are positive numbers defined up to joint scaling, a mathematical structure quite similar to compositions. The shape fixes only ratios of distances. Perturbations correspond to relative changes of the size of subshapes and of aspect ratios. The power transform increases the expression of the shape by increasing distance ratios. In analogy to subcompositional consistency, results should not depend too much on the choice of distances, because different subsets of the pairwise distances of landmarks uniquely define the shape. Various compositional analysis tools can be applied to sets of distances directly or after minor modifications concerning the singularity of the covariance matrix, and yield results with direct interpretations in terms of shape changes. The remaining problem is that not all sets of distances correspond to a valid shape. Nevertheless, interpolated or predicted shapes can be back-transformed by multidimensional scaling (when all pairwise distances are used) or free geodetic adjustment (when sufficiently many distances are used).
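To make the idea concrete, the sketch below treats the pairwise landmark distances of a shape as a vector defined up to joint scaling and maps it to centred log-ratio (clr) coordinates, so that rescaled copies of the same shape give identical coordinates. The landmark data and the choice of all pairwise distances are illustrative assumptions.

```python
# Minimal sketch: pairwise landmark distances of a shape treated as a "composition"
# (defined up to joint scaling) and mapped to clr coordinates.
import numpy as np
from scipy.spatial.distance import pdist

def clr(x):
    """Centred log-ratio transform of a positive vector (invariant to joint scaling)."""
    logx = np.log(x)
    return logx - logx.mean()

# Hypothetical 2D landmarks of one specimen (rows = landmarks).
landmarks = np.array([[0.0, 0.0], [1.0, 0.1], [0.9, 1.2], [0.1, 1.0]])
d = pdist(landmarks)            # all pairwise distances uniquely define the shape
print(clr(d))                   # same output for any rescaled copy of the shape
print(np.allclose(clr(d), clr(3.7 * d)))   # scale invariance: True
```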
Abstract:
The R package "compositions" is a tool for advanced compositional analysis. Its basic functionality has seen some conceptual improvement, now containing facilities to work with and represent ilr bases built from balances, and an elaborated subsystem for dealing with several kinds of irregular data: (rounded or structural) zeroes, incomplete observations and outliers. The general approach to these irregularities is based on subcompositions: for an irregular datum, one can distinguish a "regular" subcomposition (where all parts are actually observed and the datum behaves typically) and a "problematic" subcomposition (with those unobserved, zero or rounded parts, or else where the datum shows an erratic or atypical behaviour). Systematic classification schemes are proposed for both outliers and missing values (including zeros), focusing on the nature of the irregularities in the datum's subcomposition(s). To compute statistics with values missing at random and structural zeros, a projection approach is implemented: a given datum contributes to the estimation of the desired parameters only on the subcomposition where it was observed. For data sets with values below the detection limit, two different approaches are provided: the well-known imputation technique, and also the projection approach. To compute statistics in the presence of outliers, robust statistics are adapted to the characteristics of compositional data, based on the minimum covariance determinant (MCD) approach. The outlier classification is based on four different models of outlier occurrence and Monte-Carlo-based tests for their characterisation. Furthermore, the package provides special plots helping to understand the nature of outliers in the dataset. Keywords: coda-dendrogram, lost values, MAR, missing data, MCD estimator, robustness, rounded zeros
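The projection idea described above — each datum contributing only on the subcomposition where it was observed — can be illustrated with a pairwise log-ratio statistic. The sketch below is an independent Python illustration under that assumption, not the R package's actual API or estimator.

```python
# Sketch of the "projection" idea for compositions with missing parts: a datum
# contributes to a pairwise log-ratio statistic only where both parts are observed.
import numpy as np

def variation_matrix_with_missing(X):
    """X: (n, D) array of positive parts with np.nan for unobserved parts.
    Returns the D x D variation matrix T[i, j] = var(log(x_i / x_j)),
    estimated from the samples where both parts are available."""
    n, D = X.shape
    T = np.full((D, D), np.nan)
    logX = np.log(X)
    for i in range(D):
        for j in range(D):
            lr = logX[:, i] - logX[:, j]
            lr = lr[~np.isnan(lr)]            # keep only jointly observed pairs
            if lr.size > 1:
                T[i, j] = lr.var(ddof=1)
    return T

# Hypothetical 3-part data with one unobserved part in the second sample.
X = np.array([[0.20, 0.30, 0.50],
              [0.10, np.nan, 0.60],
              [0.25, 0.35, 0.40]])
print(variation_matrix_with_missing(X))
```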
Abstract:
Factor analysis, as a frequent technique for multivariate data inspection, is widely used also for compositional data analysis. The usual way is to use a centred logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then y = Λf + e (1), with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as Cov(y) = ΛΛ^T + ψ (2), where ψ = Cov(e) has diagonal form. The diagonal elements of ψ, as well as the loadings matrix Λ, are estimated from an estimate of Cov(y). Consider observed clr-transformed data Y as realisations of the random vector y. Outliers or deviations from the idealised model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y will lead to robust estimates of Λ and ψ in (2); see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by C(Y) = V C(Z) V^T, where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994), and the results have a direct interpretation since the links to the original variables are still preserved. The above procedure is applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
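The workflow described above (robust covariance on full-rank ilr coordinates, back-transformation to clr via C(Y) = V C(Z) V^T, then estimation of the loadings) can be sketched as follows. This is a simplified illustration on synthetic data: the MCD estimator comes from scikit-learn, and the loadings are extracted principal-axis style rather than by a full robust factor-model fit as in the cited literature.

```python
# Sketch: robust covariance of compositional data via ilr, back-transformed to clr,
# followed by a simple (principal-axis style) extraction of k loadings. Illustrative
# only; data are synthetic and not the Kola project data.
import numpy as np
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """Orthonormal (D, D-1) matrix V of a balance basis, so that clr = ilr @ V.T."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = (1.0 / j) * np.sqrt(j / (j + 1.0))
        V[j, j - 1] = -np.sqrt(j / (j + 1.0))
    return V

def clr(X):
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.dirichlet([4, 3, 2, 1], size=200)        # hypothetical 4-part compositions
D = X.shape[1]
V = ilr_basis(D)
Z = clr(X) @ V                                   # full-rank ilr coordinates
C_Z = MinCovDet(random_state=0).fit(Z).covariance_
C_Y = V @ C_Z @ V.T                              # back-transform: C(Y) = V C(Z) V^T

k = 2                                            # number of factors (illustrative)
w, U = np.linalg.eigh(C_Y)                       # eigenvalues in ascending order
Lambda = U[:, ::-1][:, :k] * np.sqrt(np.maximum(w[::-1][:k], 0.0))
print(np.round(Lambda, 3))                       # clr-space loadings of the k factors
```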
Abstract:
To give an overview of the Sociedad de Amigos del País de Málaga and the activities it carried out during the years 1906-1926, and of its impact on social and cultural life; to construct a historical-pedagogical frame of reference and to determine whether this society made education the instrument of reform for solving the country's ills. The Spanish society of the eighteenth century, which gave rise to the Sociedades Económicas de Amigos del País, is examined and analysed, and a historical-pedagogical framework of Málaga society of the period, together with the need to create an economic society, is developed. Sources: archival documents, bundles of papers, bulletins, minutes and other written documents. Method: deduction and analysis of the structures and organisation of the subject under study. The study of the Sociedades Económicas de Amigos del País has brought to light a series of facts: they came into being as a mouthpiece for the ideas of the government; their purpose was the development of agriculture, trade and industry, as well as the promotion of Enlightenment ideas; their trajectory was irregular, since they disappeared several times only to reappear later; and they were financed through members' fees. In Málaga the society was born as a government initiative with the characteristics already described. Free classes were given, and a schedule of subjects, teaching staff, etc. was established. Cultural activities were organised for the benefit of the young people of Málaga, and the figure of D. Pedro Gómez Chaix stood out, promoter of the construction of the workers' district América, as well as of the Ateneo Comercial, the Biblioteca Popular, etc.
Abstract:
The historical-political and socio-cultural context in which the training centres for Physical Education teachers were created and developed. Description of the institutions regarded as antecedents, and study of those devoted to this training since 1805. Analysis of the incorporation of the curricula into schools. Analysis and assessment of the legislation relating to the training of Physical Education teachers. Consideration of future expectations and proposal of a model. For the historical research, the analytical and dialectical methods were used, together with descriptive research and methods of document analysis. Primary and secondary sources and archives were consulted, using content-analysis techniques with non-grammatical base units and analysis of complete documents. Teacher training arises in Spain in the first third of the nineteenth century. From the beginning, teacher training has been organised as follows: for primary school in the Escuelas Normales, and for secondary and higher education in higher faculties. The training of Physical Education teachers was institutionalised in 1883 with the opening of the 'Escuela Central de Gimnástica'. Physical Education appears irregularly and intermittently in the curricula of the nineteenth century, and always in secondary education; it would not be consolidated in secondary education until well into the twentieth century, while in the university it has been given priority. The academic, professional and employment situation of Physical Education teachers began to be resolved with the announcement, in 1985, of competitive examinations for the corps of associate secondary-school teachers and tenured teachers of vocational training. The aim should be for these teachers to complete higher studies, and the Physical Education teacher must receive thorough psycho-pedagogical training.