631 results for algorithmic skeletons


Relevance:

10.00%

Publisher:

Abstract:

The anion radicals CnOn•− (n = 3-6) can be generated by ionization of cyclic carbonyl compounds in the negative-ion mode. The ions as well as the corresponding neutral counterparts are probed by means of different mass spectrometric techniques. The results suggest that oxocarbons, i.e. cyclic polyketones, are formed with conservation of the skeletons of the precursor molecules. At least for n = 3, however, the experimental findings indicate partial rearrangement of the expected cyclopropanetrione structure to an oxycarboxylate for the anion, i.e. •O−C=C−CO2−. For n = 4 and 6, almost complete dissociation of the neutral polyones into carbon monoxide is found, whereas for n = 5 a distinct recovery signal indicates the generation of genuine cyclopentanepentaone.

Relevance:

10.00%

Publisher:

Abstract:

The modern stage of development of automatic control and navigation systems for small reusable unmanned aerial vehicles (UAVs) imposes strict requirements on the autonomy, accuracy and size of these systems. The contradictory nature of these requirements dictates a tight functional and algorithmic coupling of several different onboard sensors into a single computational process based on methods of optimal filtering. Data fusion of micro-electromechanical inertial measurement units, barometric pressure sensors, and signals of global navigation satellite system (GNSS) receivers is now widely used in numerous strapdown inertial navigation systems (INS). However, such systems do not fully meet the requirements for jamming immunity, fault tolerance, autonomy, and accuracy of navigation. At the same time, significant progress has recently been demonstrated by navigation systems that apply the correlation-extremal principle to optical data flow and digital terrain maps. This article proposes a new architecture of an automatic navigation management system (ANMS) for small UAVs, which combines algorithms of strapdown INS, satellite navigation and an optical navigation system.
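The optimal-filtering idea behind this kind of sensor fusion can be sketched with a minimal one-dimensional Kalman filter: an inertial (INS-like) prediction is blended with noisy satellite (GNSS-like) position fixes, weighted by their uncertainties. The function, the noise variances and the constant-velocity model below are illustrative assumptions, not the ANMS design described in the article.

```python
# Minimal 1D Kalman filter: fuse a dead-reckoned (INS-like) position
# prediction with noisy external (GNSS-like) position fixes.
# All values and noise levels are illustrative.

def kalman_1d(fixes, velocity, dt=1.0, q=0.01, r=4.0):
    """Track position given a known velocity (inertial prediction)
    and noisy position fixes (satellite update). q and r are the
    process and measurement noise variances."""
    x, p = fixes[0], r          # initialise state from the first fix
    estimates = [x]
    for z in fixes[1:]:
        # predict: propagate the state with the inertial velocity
        x = x + velocity * dt
        p = p + q
        # update: blend in the satellite fix, weighted by uncertainty
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With noisy fixes around a straight-line trajectory, the filtered estimate tracks the true position far more closely than the raw fixes, which is exactly the benefit the abstract attributes to tight coupling of INS and GNSS data.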

Relevance:

10.00%

Publisher:

Abstract:

This project characterised the bone microarchitecture of adult mice lacking the hormone acyl ghrelin by high-resolution micro-computed tomography, and investigated the expression of the ghrelin axis in cells of human and mouse fetal cartilage. This thesis highlights for the first time the physiological role of the ghrelin axis in the bone microenvironment of aged mice. Furthermore, it improves our understanding of the complex expression patterns of the ghrelin axis in cartilage cells of human and mouse fetal skeletons.

Relevance:

10.00%

Publisher:

Abstract:

Ornithologists have been exploring the possibilities and the methodology of recording and archiving animal sounds for many decades. Primatologists, however, have only relatively recently become aware that recordings of primate sounds may be just as valuable as traditional scientific specimens such as skins or skeletons, and should be preserved for posterity (Fig. 16.1). Audio recordings should be fully documented, archived and curated to ensure proper care and accessibility. As natural populations disappear, sound archives will become increasingly important (Bradbury et al., 1999). Studying animal vocal communication is also relevant from the perspective of behavioural ecology. Vocal communication plays a central role in animal societies. Calls are believed to provide various types and amounts of information. These may include, among other things: (1) information about the sender's identity (e.g. species, sex, age class, group membership or individual identity); (2) information about the sender's status and mood (e.g. dominance, fear or aggressive motivation, fitness); and (3) information about relevant events or discoveries in the sender's environment (e.g. predators, food location). When studying acoustic communication, sound recordings are usually required to analyse the spectral and temporal structure of vocalizations or to perform playback experiments (Chapter 11)...

Relevance:

10.00%

Publisher:

Abstract:

The desire to solve problems caused by socket prostheses in transfemoral amputees (TFA) and the success of osseointegration in dental applications have led to the introduction of osseointegration in orthopedic surgery. Since its first introduction in 1990 in Gothenburg, Sweden, the osseointegrated (OI) orthopedic fixation has proven several benefits [1]. The surgery consists of two surgical procedures followed by a lengthy rehabilitation program. The rehabilitation program after an OI implant includes a specific training period with a short training prosthesis. Since mechanical loading is considered to be one of the key factors that influence bone mass and the osseointegration of bone-anchored implants, the rehabilitation program also needs to include some form of load bearing exercises (LBE). To date there are two frequently used commercially available human implants, and the literature confirms that load bearing exercises are performed by patients with both types of OI implants: a first article by Aschoff et al., published in 2010 in the Journal of Bone and Joint Surgery [2], and a second by Hagberg et al. (2009), which gives a very thorough description of the rehabilitation program of TFA fitted with an OPRA implant. The progression of the load, however, is determined individually according to the quality of the residual skeleton, the pain level and the body weight of the participant [1]. Patients use a classical bathroom weighing scale to control the load on the implant during the course of their rehabilitation. The bathroom scale is an affordable and easy-to-use device, but it has some important shortcomings: it provides instantaneous feedback to the patient only on the magnitude of the vertical component of the applied force, while the forces and moments applied along and around the three axes of the implant remain unknown.
Although there are different ways to assess the load on the implant, for instance through inverse dynamics in a motion analysis laboratory [3-6], this assessment is challenging. A recent proof-of-concept study by Frossard et al. (2009) showed that the shortcomings of the weighing scale can be overcome by a portable kinetic system based on a commercial transducer [7].

Relevance:

10.00%

Publisher:

Abstract:

Terra Preta is a site-specific bio-energy project which aims to create a synergy between the public and the pre-existing engineered landscape of Freshkills Park on Staten Island, New York. The project challenges traditional paradigms of public space by proposing a dynamic and ever-changing landscape. The initiative allows the public to self-organise the landscape and to engage in 'algorithmic processes' of growth, harvest and space creation.

Relevance:

10.00%

Publisher:

Abstract:

The rise of the peer economy poses complex new regulatory challenges for policy-makers. The peer economy, typified by services like Uber and AirBnB, promises substantial productivity gains through the more efficient use of existing resources and a marked reduction in regulatory overheads. These services are rapidly disrupting established markets, but the regulatory trade-offs they present are difficult to evaluate. In this paper, we examine the peer economy in the context of ride-sharing and the ongoing struggle over regulatory legitimacy between the taxi industry and the new entrants Uber and Lyft. We first sketch the outlines of ride-sharing as a complex regulatory problem, showing how questions of efficiency are necessarily bound up in questions about levels of service, controls over pricing, and different approaches to setting, upholding, and enforcing standards. We outline the need for data-driven policy to understand the way that algorithmic systems work and what effects these might have in the medium to long term on measures of service quality, safety, labour relations, and equality. Finally, we discuss how the competition for legitimacy is not primarily being fought on utilitarian grounds, but is instead carried out within the context of a heated ideological battle between different conceptions of the role of the state and private firms as regulators. We ultimately argue that the key to understanding these regulatory challenges is to develop better conceptual models of the governance of complex systems by private actors and of the methods available to the state for influencing their actions. These struggles are not, as is often thought, struggles between regulated and unregulated systems; rather, they turn on the important regulatory work carried out by powerful, centralised private firms – both the incumbents of existing markets and the disruptive network operators of the peer economy.

Relevance:

10.00%

Publisher:

Abstract:

Large multi-site image-analysis studies have successfully discovered genetic variants that affect brain structure in tens of thousands of subjects scanned worldwide. Candidate genes have also been associated with brain integrity, measured using fractional anisotropy (FA) in diffusion tensor images (DTI). To evaluate the heritability and robustness of DTI measures as a target for genetic analysis, we compared 417 twins and siblings scanned on the same day on the same high-field (4-Tesla) scanner with two protocols: (1) 94 directions, 2 mm slice thickness; (2) 27 directions, 5 mm slice thickness. Using mean FA in white-matter ROIs and FA skeletons derived using FSL, we (1) examined differences in voxelwise means, variances, and correlations among the measures; and (2) assessed heritability with structural equation models, using the classical twin design. FA measures from the genu of the corpus callosum were highly heritable, regardless of protocol. Genome-wide analysis of the genu mean FA revealed differences across protocols in the top associations.
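The classical twin design compares trait correlations in monozygotic and dizygotic twin pairs. The study fits full structural equation (ACE) models; the simpler back-of-envelope route is Falconer's formula, sketched below. The function and the correlation values are illustrative, not taken from the study.

```python
# Back-of-envelope variance decomposition from twin correlations
# (Falconer's formula). Illustrative only: the abstract's analysis
# uses full structural equation (ACE) models, and the correlation
# values here are made up.

def falconer_ace(r_mz, r_dz):
    """Split trait variance into additive genetic (A), shared
    environment (C) and unique environment (E) components from
    monozygotic (r_mz) and dizygotic (r_dz) twin correlations."""
    a = 2 * (r_mz - r_dz)   # heritability h^2
    c = 2 * r_dz - r_mz     # shared-environment component
    e = 1 - r_mz            # unique environment + measurement error
    return a, c, e

# hypothetical FA correlations for a highly heritable tract
a, c, e = falconer_ace(r_mz=0.80, r_dz=0.45)
print(round(a, 2), round(c, 2), round(e, 2))  # → 0.7 0.1 0.2
```

A high MZ correlation combined with a markedly lower DZ correlation is the signature of strong heritability, which is what the abstract reports for genu FA.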

Relevance:

10.00%

Publisher:

Abstract:

The correlation dimension D2 and correlation entropy K2 are both important quantifiers in nonlinear time series analysis. However, the use of D2 as a discriminating measure has been more common than that of K2. One reason for this is that D2 is a static measure and can be easily evaluated from a time series. In many cases, however, especially those involving coloured noise, K2 is regarded as the more useful measure. Here we present an efficient algorithmic scheme to compute K2 directly from time series data and show that K2 can be a more effective measure than D2 for analysing practical time series involving coloured noise.
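One standard route to K2 (the abstract's own scheme may differ in detail) goes through Grassberger-Procaccia correlation sums at successive embedding dimensions: K2 ≈ (1/τ) ln(C_m(r) / C_{m+1}(r)). The sketch below uses illustrative parameter choices (m, r, τ) and a brute-force pairwise loop rather than an optimised algorithm.

```python
# Sketch of the correlation-sum route to the correlation entropy K2:
# K2 ~ (1/tau) * ln(C_m(r) / C_{m+1}(r)), where C_m(r) is the
# fraction of delay-vector pairs closer than r at embedding
# dimension m. Parameters are illustrative, not from the article.
import math

def correlation_sum(series, m, r, tau=1):
    """Fraction of m-dimensional delay-vector pairs closer than r
    in the maximum norm."""
    n = len(series) - (m - 1) * tau
    vecs = [[series[i + j * tau] for j in range(m)] for i in range(n)]
    close = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if max(abs(a - b) for a, b in zip(vecs[i], vecs[j])) < r:
                close += 1
    return close / total

def k2_estimate(series, m, r, tau=1):
    """Estimate K2 from the ratio of correlation sums at m and m+1."""
    cm = correlation_sum(series, m, r, tau)
    cm1 = correlation_sum(series, m + 1, r, tau)
    return math.log(cm / cm1) / tau
```

For a chaotic signal such as the fully developed logistic map, this estimator returns a positive entropy, whereas for a regular signal the two correlation sums nearly coincide and the estimate approaches zero.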

Relevance:

10.00%

Publisher:

Abstract:

The prominent roles of birds, often mentioned in historical sources, are not well reflected in archaeological research. The absence or scarcity of bird bones in archaeological assemblages has often been seen as an indication of a minor role of birds in the prehistoric economy or ideology, or been explained by taphonomic loss. Few studies exist where birds form the basis for extensive archaeological interpretation. In this doctoral dissertation, bird bone material from various Stone Age sites in the Baltic Sea region is investigated. The study period is approximately 7000-3400 BP, comprising mainly Neolithic cultures. The settlement material comes from Finland, Åland, Gotland, Saaremaa and Hiiumaa. Osteological materials are used for studying the economic and cultural importance of birds, fowling methods and principal fowling seasons. The bones were identified, and earlier identifications partially checked, with the help of a reference collection of modern skeletons. Fracture analysis was used to study the deposition history of bones at the Ajvide settlement site. Birds in burials at two large cemeteries, Ajvide on Gotland and Zvejnieki in northern Latvia, were investigated in order to study the roles of birds in burial practices. My study reveals that the economic importance of birds is, at least seasonally, often more prominent than usually thought, and varies greatly between areas. Fowling has been most important in coastal areas, especially during the breeding season. Waterbirds and grouse species were generally the most important groups in the Finnish Stone Age economy. The identified species composition shows much resemblance to contemporary hunting, with species such as the mallard and capercaillie commonly found.
Burial materials and additional archaeological evidence from Gotland, Latvia and some other parts of northern Europe indicate that birds (e.g. jay, whooper swan, ducks) have been socially and ideologically important for the studied groups, indicating a place in the belief system, e.g. clan totemism. The burial finds indicate that some common ideas about waterbirds (perhaps as messengers or spirit helpers) might have existed in the northern European Stone Age.

Relevance:

10.00%

Publisher:

Abstract:

The thesis aims to link the biolinguistic research program with results on conceptual combination from cognitive psychology. It derives a theory of the syntactic structure of noun and adjectival compounds from the Empty Lexicon Hypothesis. Two compound-forming operations are described: root compounding and word compounding. The aptness of the theory is tested with Finnish and Greek compounds. From the syntactic theory, semantic requirements for the conceptual system are derived, especially requirements for handling morphosyntactic features. These requirements are compared with three prominent theories of conceptual combination: the relation theory CARIN, the Dual-Process theory and the C3 theory. The claims about the explanatory power of the modifier's relational distributions in CARIN are discarded, as the method for sampling and building relational distributions is not reliable and the algorithmic instantiation of the theory does not compute what it claims to compute. The relational theory nevertheless yields results supporting the existence of 'easy' relations for certain concepts. The Dual-Process theory is found to provide results that cannot, in theory, be affected by the linguistic system, but its basic idea of property compounds is retained. The C3 theory is found not to be computationally realistic, but its basic results on diagnosticity and the local properties (domains) of the conceptual system are solid. The three models of conceptual combination are rethought as a problem of finding the shortest route between two concepts. The suggested new basis for modelling is a bare conceptual landscape in which morphosyntactic or semantic features provide guidance, and whose structural features are basically unknown except for how they react to features from the linguistic system. Minimalist principles for conceptual modelling are suggested.

Relevance:

10.00%

Publisher:

Abstract:

The well-known Cahn-Ingold-Prelog method of specifying stereoisomers is introduced within the framework of ALWIN (Algorithmic Wiswesser Notation). Given the structural diagram, the structural ALWIN is first formed; the specification symbols are then introduced at the appropriate places to describe the stereoisomers.

Relevance:

10.00%

Publisher:

Abstract:

Place identification refers to the process of analyzing sensor data in order to detect places, i.e., spatial areas that are linked with activities and associated with meanings. Place information can be used, e.g., to provide awareness cues in applications that support social interactions, to provide personalized and location-sensitive information to the user, and to support mobile user studies by providing cues about the situations the study participant has encountered. Regularities in human movement patterns make it possible to detect personally meaningful places by analyzing the location traces of a user. This thesis focuses on providing system-level support for place identification, as well as on algorithmic issues related to the place identification process. The move from location to place requires interactions between location-sensing technologies (e.g., GPS or GSM positioning), algorithms that identify places from location data, and applications and services that utilize place information. These interactions can be facilitated using a mobile platform, i.e., an application or framework that runs on a mobile phone. For the purposes of this thesis, mobile platforms automate data capture and processing and provide means for disseminating data to applications and other system components. The first contribution of the thesis is BeTelGeuse, a freely available, open-source mobile platform that supports multiple runtime environments. The actual place identification process can be understood as a data analysis task where the goal is to analyze (location) measurements and to identify areas that are meaningful to the user. The second contribution of the thesis is the Dirichlet Process Clustering (DPCluster) algorithm, a novel place identification algorithm. The performance of the DPCluster algorithm is evaluated using twelve different datasets that have been collected by different users, at different locations, and over different periods of time.
As part of the evaluation we compare the DPCluster algorithm against other state-of-the-art place identification algorithms. The results indicate that the DPCluster algorithm provides improved generalization performance against spatial and temporal variations in location measurements.
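As a rough illustration of nonparametric place identification (not the thesis's DPCluster algorithm itself), a DP-means-style pass over location fixes captures the same idea of not fixing the number of places in advance: a new cluster is opened whenever a fix lies farther than a threshold from every existing cluster centre. The function name and the threshold `lam` below are hypothetical.

```python
# Not the thesis's DPCluster algorithm: a DP-means-style sketch of
# nonparametric place detection. The number of "places" is not fixed
# in advance; a new cluster opens whenever a fix falls farther than
# lam from every existing cluster centre. Illustrative only.
import math

def dp_means(points, lam):
    """One-pass nonparametric clustering of (x, y) location fixes."""
    centres, counts, labels = [], [], []
    for x, y in points:
        dists = [math.hypot(x - cx, y - cy) for cx, cy in centres]
        if not centres or min(dists) > lam:
            centres.append((x, y))      # open a new place
            counts.append(1)
            labels.append(len(centres) - 1)
        else:
            k = dists.index(min(dists))
            n = counts[k] + 1           # update the running mean centre
            cx, cy = centres[k]
            centres[k] = (cx + (x - cx) / n, cy + (y - cy) / n)
            counts[k] = n
            labels.append(k)
    return centres, labels
```

Feeding in fixes clustered around two distinct locations yields two place centres without ever specifying the cluster count, which is the property a Dirichlet-process-based method also provides (in a fully probabilistic form).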

Relevance:

10.00%

Publisher:

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence, without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test, and can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
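To make the significance testing concrete, the sketch below scores a single candidate rule X -> A with a one-sided Fisher's exact test on its 2x2 contingency table. The counts are hypothetical, and the thesis's actual contribution - the pruned global search over all rules - is not shown here.

```python
# Score one dependency rule X -> A with a one-sided Fisher's exact
# test: the probability of seeing at least the observed number of
# X-and-A co-occurrences under the hypergeometric null of
# independence, given the margins. Counts below are hypothetical.
from math import comb

def fisher_one_sided(n_xa, n_x, n_a, n):
    """P(co-occurrence count >= n_xa | n rows, X in n_x, A in n_a)."""
    upper = min(n_x, n_a)
    total = comb(n, n_x)
    tail = sum(comb(n_a, k) * comb(n - n_a, n_x - k)
               for k in range(n_xa, upper + 1))
    return tail / total

# hypothetical data: 1000 rows, X holds in 100, A in 300, and X and A
# co-occur 60 times (roughly double the ~30 expected by chance)
p = fisher_one_sided(n_xa=60, n_x=100, n_a=300, n=1000)
```

A co-occurrence count this far above its expectation yields a vanishingly small p-value, i.e. a genuine positive dependency in the sense the abstract describes; the negative rule X -> not A would be scored symmetrically on the complementary cell.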