935 results for complex data
Abstract:
This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling the complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification, we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.
Abstract:
International Scientific Forum, ISF 2013, 12-14 December 2013, Tirana.
Abstract:
A new general fitting method based on the Self-Similar (SS) organization of random sequences is presented. The proposed analytical function helps to fit the response of many complex systems when their recorded data form a self-similar curve. The verified SS principle opens new possibilities for fitting economic, meteorological and other complex data when a mathematical model is absent but a reduced description in terms of some universal set of fitting parameters is needed. The fitting function is verified on economic data (price of a commodity versus time) and weather data (the Earth's mean surface temperature versus time); for these nontrivial cases it is possible to obtain a very good fit of the initial data set. The general conditions of application of this fitting method, describing the response of many complex systems, and its forecasting possibilities are discussed.
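The abstract does not give the paper's actual self-similar fitting function, but the general idea of a reduced-parameter fit to a scale-invariant curve can be sketched with the simplest self-similar form, a power law y = a * t**b, fitted by linear least squares in log-log space (an illustrative stand-in, not the authors' method):

```python
import numpy as np

def fit_power_law(t, y):
    """Fit y = a * t**b by linear least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(t), np.log(y), 1)
    return np.exp(log_a), b

# Synthetic self-similar data with multiplicative noise.
t = np.linspace(1.0, 100.0, 200)
rng = np.random.default_rng(0)
y = 2.5 * t**0.7 * np.exp(rng.normal(0.0, 0.02, t.size))

a_hat, b_hat = fit_power_law(t, y)
print(round(a_hat, 1), round(b_hat, 1))  # close to the true (2.5, 0.7)
```

The two fitted numbers play the role of the "universal set of fitting parameters" the abstract mentions: a compact description usable when no mechanistic model exists.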
Abstract:
Wireless Sensor Networks (WSNs) are increasingly used in various application domains such as home automation, agriculture, industry and infrastructure monitoring. As applications tend to leverage larger geographical deployments of sensor networks, the availability of an intuitive and user-friendly programming abstraction becomes a crucial factor in enabling faster and more efficient development and reprogramming of applications. We propose a programming pattern named sMapReduce, inspired by the Google MapReduce framework, for mapping application behaviors onto a sensor network and enabling complex data aggregation. The proposed pattern requires a user to create a network-level application in two functions, sMap and Reduce, abstracting away the low-level details without sacrificing the control needed to develop complex logic. Such a two-fold division of programming logic is a natural fit for typical sensor network operation, making sensing and topological modalities accessible to the user.
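The two-function split described above can be sketched in plain Python; the function names and the region-tagging logic here are illustrative assumptions, not the actual sMapReduce API:

```python
from collections import defaultdict

def s_map(node_id, reading):
    """Per-node map step: tag a raw sensor reading with a region key."""
    region = "north" if node_id < 50 else "south"
    return region, reading

def reduce_readings(mapped):
    """Network-level reduce step: average readings per region."""
    buckets = defaultdict(list)
    for region, value in mapped:
        buckets[region].append(value)
    return {region: sum(vals) / len(vals) for region, vals in buckets.items()}

# (node_id, temperature) pairs standing in for live sensor data.
readings = [(10, 21.0), (20, 23.0), (60, 30.0), (70, 32.0)]
result = reduce_readings(s_map(n, r) for n, r in readings)
print(result)  # {'north': 22.0, 'south': 31.0}
```

The point of the pattern is that the user writes only these two functions; distributing the map step to nodes and routing the mapped tuples to the reducer is the framework's job.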
Abstract:
ABSTRACT OBJECTIVE To describe the prevalence of eating habits considered healthy in adolescents according to sex, age, education level of the mother, school type, session of study, and geographic region. METHODS The assessed data come from the Study of Cardiovascular Risks in Adolescents (ERICA), a cross-sectional, national and school-based study. Adolescents from 1,247 schools in 124 Brazilian municipalities were evaluated using a self-administered questionnaire with a section on aspects related to eating behaviors. The following eating behaviors were considered healthy: consuming breakfast, drinking water, and having meals accompanied by parents or legal guardians. All prevalence estimates were presented proportionally, with their respective 95% confidence intervals. The Chi-square test was used to evaluate differences in the prevalence of healthy eating habits according to the other variables. The survey module of Stata version 13.0 was used to analyze the complex data. RESULTS We evaluated 74,589 adolescents (72.9% of the eligible students). Of these, 55.2% were female, with an average age of 14.6 years (SD = 1.6). Approximately half of Brazilian adolescents showed healthy eating habits in consuming breakfast, drinking five or more glasses of water a day, and having meals with parents or legal guardians. All analyzed healthy eating habits showed statistically significant differences by sex, age, type of school, session of study, or geographic region. CONCLUSIONS We suggest that specific actions with an intersectoral approach be implemented to disseminate the benefits of healthy eating habits. Older female adolescents (15 to 17 years old) who studied in public schools, resided in the Southeast region, and whose mothers had lower education levels should be the focus of these actions, since they presented lower frequencies of the evaluated healthy habits.
Abstract:
Nowadays, road accidents are a major public health problem, one forecast to worsen if road safety is not addressed properly: about 1.2 million people die every year around the globe. In 2012, Portugal recorded 573 on-site fatalities in road accidents, which, along with Denmark, represented the largest decrease in the European Union relative to 2011. Beyond the impact of the fatalities themselves, the economic and social costs of road accidents were estimated at about 1.17% of the Portuguese gross domestic product in 2010. Visual Analytics combines data analysis techniques with interactive visualizations, which facilitates the process of knowledge discovery in large and complex data sets, while Geovisual Analytics facilitates the exploration of space-time data through maps with the different variables and parameters under analysis. In Portugal, the identification of road accident accumulation zones, named black spots in this work, has been restricted to fixed annual windows. In this work, we present a dynamic approach based on Visual Analytics techniques that is able to identify the displacement of black spots over sliding windows of 12 months. Moreover, by using different parameterizations of the formula usually used to detect black spots, it is possible to identify zones that are close to becoming black spots. Through the proposed visualizations, the study and identification of countermeasures to this social and economic problem can gain new ground, and the decision-making process is thus supported and improved.
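The sliding-window idea can be sketched as follows. The abstract does not give the authors' actual black-spot formula, so this uses a simple accident-count threshold per road segment as an illustrative stand-in:

```python
from collections import Counter

def black_spots(accidents, window_start, threshold=3):
    """Flag segments with >= threshold accidents in a 12-month window.

    accidents: list of (month_index, segment_id) pairs.
    """
    window = [seg for month, seg in accidents
              if window_start <= month < window_start + 12]
    counts = Counter(window)
    return {seg for seg, n in counts.items() if n >= threshold}

accidents = [(0, "A1"), (2, "A1"), (5, "A1"), (11, "B2"), (13, "A1"), (14, "A1")]
print(sorted(black_spots(accidents, 0)))  # months 0-11: segment A1 is flagged
print(sorted(black_spots(accidents, 3)))  # months 3-14: A1 remains a black spot
```

Sliding the window month by month, rather than using fixed annual windows, is what lets the approach track how black spots appear, persist, and move over time.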
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. 
These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
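Strategy (ii) above, 'predict first, assemble later', can be sketched numerically. The per-species Gaussian response curves, the presence threshold, and the single environmental gradient are illustrative assumptions; real species distribution models are far richer:

```python
import numpy as np

rng = np.random.default_rng(3)
env = np.linspace(0.0, 1.0, 50)          # one environmental gradient (50 sites)

# "Predict first": one simple model per species, here a Gaussian response
# centred on a random environmental optimum.
optima = rng.uniform(0.0, 1.0, 20)
stack = np.array([np.exp(-((env - opt) / 0.15) ** 2) for opt in optima])

# "Assemble later": threshold each species map, then aggregate the stack
# into a community-level property (species richness per site).
presence = stack > 0.5
richness = presence.sum(axis=0)

print(stack.shape, richness.shape)  # (20, 50) (50,)
```

The same stack could instead be classified or ordinated to yield community types or compositional gradients, which is the flexibility the abstract attributes to this strategy.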
Abstract:
In colonies of social Hymenoptera (which include all ants, as well as some wasp and bee species), only queens reproduce whereas workers generally perform other tasks. The evolution of workers' reproductive altruism can be explained by kin selection, which states that workers can indirectly transmit copies of their genes by helping the reproduction of relatives. The relatedness between queens and workers may however be low, particularly when there are multiple queens per colony, which limits the transmission of copies of workers' genes and increases potential conflicts between colony members. In this thesis, we investigated the link between social structure variations and conflicts, and explored the mechanisms involved in variation of colony queen number in ants. According to kin selection, workers should rear the brood they are most related to. In social Hymenoptera, males are haploid whereas females (workers and queens) are diploid. As a result, workers can be up to three times more related to females than to males in some colonies, where they should consequently favour the production of females. In contrast, queens are equally related to daughters and sons in all types of colonies and therefore should favour a balanced sex ratio. In a meta-analysis across all studies of social Hymenoptera, we showed that colony sex ratio is generally largely influenced by workers. Hence, the evolution of social structures where queens and workers are equally related to males and females may contribute to decreasing the conflict between the two castes over colony sex ratio. Another conflict between queens and workers can occur over male production. Many species contain workers that still have the ability to lay haploid eggs. In some social structures, workers are on average more related to sons of queens than to sons of other workers. As a result, workers should eliminate worker-laid eggs to favour queen-laid eggs.
We showed that in the ant Formica selysi, workers eliminate more worker-laid than queen-laid eggs, independently of colony social structure. These results therefore suggest that worker policing can evolve independently of relatedness, potentially because of costs of worker reproduction at the colony level. Colony queen number is a key parameter that influences relatedness between group members. Queen body size is generally linked to the success of independent colony foundation by single queens and may influence the number of queens in the new colony. In the ant F. selysi, single-queen colonies produce larger queens than multiple-queen colonies. We showed that this association results from genes or maternal effects transmitted to the eggs. However, we also found that queens produced in colonies of the two social forms did not differ in their general ability to found new colonies independently. Queen body size may also influence queen dispersal ability and constrain small queens to be re-adopted into their original nest after mating nearby. We tested the acceptance of new queens in another ant species, Formica paralugubris, which has numerous queens per colony. Our results show that workers do not discriminate between nestmate and foreign queens, and more generally accept new queens at a limited rate. To conclude, this thesis shows that the mechanisms influencing variation in colony queen number, and the influence of these changes on conflict resolution, are complex. Data gathered in this thesis therefore constitute a solid background for further research on the evolution and the maintenance of complex organisations in insect societies.
Abstract:
We present an open-source ITK implementation of a direct Fourier method for tomographic reconstruction, applicable to parallel-beam x-ray images. Direct Fourier reconstruction makes use of the central-slice theorem to build a polar 2D Fourier space from the 1D transformed projections of the scanned object, which is resampled into a Cartesian grid. An inverse 2D Fourier transform eventually yields the reconstructed image. Additionally, we provide a complex wrapper to the BSplineInterpolateImageFunction to overcome ITK's current lack of image interpolators dealing with complex data types. A sample application is presented and extensively illustrated on the Shepp-Logan head phantom. We show that appropriate input zero-padding and 2D-DFT oversampling rates, together with radial cubic B-spline interpolation, improve 2D-DFT interpolation quality and are efficient remedies to reduce reconstruction artifacts.
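A minimal NumPy check of the central-slice theorem the method relies on (a sketch, not the ITK implementation): the 1D Fourier transform of a parallel-beam projection equals the central slice, at the same angle, of the object's 2D Fourier transform.

```python
import numpy as np

n = 64
img = np.zeros((n, n))
img[24:40, 20:44] = 1.0                    # simple rectangular "phantom"

projection = img.sum(axis=0)               # 0-degree parallel-beam projection
slice_1d = np.fft.fft(projection)          # 1D FT of the projection

f2d = np.fft.fft2(img)                     # 2D FT of the object
central_row = f2d[0, :]                    # central slice at 0 degrees

print(np.allclose(slice_1d, central_row))  # True
```

Direct Fourier reconstruction runs this identity backwards: the transformed projections populate Fourier space on a polar grid, which must then be interpolated onto a Cartesian grid before the inverse 2D FFT, and that interpolation step is exactly where the zero-padding, oversampling, and B-spline choices discussed above matter.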
Abstract:
A previous study sponsored by the Smart Work Zone Deployment Initiative, “Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility,” demonstrated the feasibility of combining readily available, inexpensive software programs, such as SketchUp and Google Earth, with standard two-dimensional civil engineering design programs, such as MicroStation, to create animations of construction work zones. The animations reflect changes in work zone configurations as the project progresses, representing an opportunity to visually present complex information to drivers, construction workers, agency personnel, and the general public. The purpose of this study is to continue the work from the previous study to determine the added value and resource demands created by including more complex data, specifically traffic volume, movement, and vehicle type. This report describes the changes that were made to the simulation, including incorporating additional data and converting the simulation from a desktop application to a web application.
Abstract:
In this paper we present the ViRVIG Institute, a recently created institution that joins two well-known research groups: MOVING in Barcelona and GGG in Girona. Our main research topics are Virtual Reality devices and interaction techniques, complex data models, realistic materials and lighting, geometry processing, and medical image visualization. We briefly introduce the history of both research groups and present some representative projects. Finally, we sketch our lines of future research.
Abstract:
Two likelihood ratio (LR) approaches are presented to evaluate the strength of evidence of MDMA tablet comparisons. The first is based on a more 'traditional' comparison of MDMA tablets using distance measures (e.g., Pearson correlation distance or Euclidean distance). In this approach, LRs are calculated using the distribution of distances between tablets of the same batch and that between tablets of different batches. The second approach is based on methods used in some other fields of forensic comparison. Here, LRs are calculated from the distribution of values of MDMA tablet characteristics within a specific batch and across all batches. The data used in this paper must be seen as examples to illustrate both methods; in future research the methods can be applied to other and more complex data. In this paper, the methods and their results are discussed, considering their performance in evidence evaluation and several practical aspects. With respect to evidence in favor of the correct hypothesis, the second method proved better than the first: the LRs in same-batch comparisons are generally higher than with the first method, and the LRs in different-batch comparisons are generally lower. On the other hand, for operational purposes (where quick information is needed), the first method may be preferred because it is less time-consuming. With this method a model has to be estimated only once in a while, which means that only a few measurements have to be done, while with the second method more measurements are needed because a new model has to be estimated each time.
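The first, distance-based approach can be sketched as follows. The Gaussian densities and the synthetic distance distributions are illustrative assumptions; the paper's actual distance measures and models may differ:

```python
import numpy as np

def gauss_pdf(x, mu, sd):
    """Gaussian probability density, used as a simple model of each distance distribution."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Synthetic pairwise distances: same-batch comparisons tend to be close,
# different-batch comparisons tend to be far apart.
rng = np.random.default_rng(1)
same_batch_dists = rng.normal(0.2, 0.05, 500)
diff_batch_dists = rng.normal(0.8, 0.15, 500)

mu_s, sd_s = same_batch_dists.mean(), same_batch_dists.std()
mu_d, sd_d = diff_batch_dists.mean(), diff_batch_dists.std()

def likelihood_ratio(d):
    """LR = p(distance | same batch) / p(distance | different batch)."""
    return gauss_pdf(d, mu_s, sd_s) / gauss_pdf(d, mu_d, sd_d)

print(likelihood_ratio(0.2) > 1.0)  # small distance: supports "same batch"
print(likelihood_ratio(0.8) < 1.0)  # large distance: supports "different batch"
```

This also shows why the first method is operationally cheap: the two distance distributions are estimated once from reference comparisons, after which evaluating a new tablet pair is a single density ratio.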
Abstract:
Diffusion MRI has evolved into an important clinical diagnostic and research tool. Though clinical routine mainly uses diffusion-weighted and tensor imaging approaches, Q-ball imaging and diffusion spectrum imaging techniques have become more widely available. They are frequently used in research-oriented investigations, in particular those aiming at measuring brain network connectivity. In this work, we aim at assessing the dependency of connectivity measurements on various diffusion encoding schemes in combination with appropriate data modeling. We process and compare the structural connection matrices computed from several diffusion encoding schemes, including diffusion tensor imaging, q-ball imaging and high angular resolution schemes, such as diffusion spectrum imaging, with a publicly available processing pipeline for data reconstruction, tracking and visualization of diffusion MR imaging. The results indicate that the high angular resolution schemes maximize the number of obtained connections when applying identical processing strategies to the different diffusion schemes. Compared to conventional diffusion tensor imaging, the added connectivity is mainly found for pathways in the 50-100 mm range, corresponding to neighboring association fibers and long-range associative, striatal and commissural fiber pathways. The analysis of the major associative fiber tracts of the brain reveals striking differences between the applied diffusion schemes. More complex data modeling techniques (beyond the tensor model) are recommended 1) if the tracts of interest run through large fiber crossings such as the centrum semi-ovale, or 2) if non-dominant fiber populations, e.g. the neighboring association fibers, are the subject of investigation.
An important finding of the study is that, since the ground-truth sensitivity and specificity are not known, results arising from different strategies in data reconstruction and/or tracking are difficult to compare.
Abstract:
Due to the large number of characteristics, there is a need to extract the most relevant characteristics from the input data, so that the amount of information lost is minimal and the classification realized with the projected data set remains relevant with respect to the original data. To achieve this feature extraction, different statistical techniques, such as principal component analysis (PCA), may be used. This thesis describes an extension of PCA allowing the extraction of a finite number of relevant features from high-dimensional fuzzy data and noisy data. PCA finds linear combinations of the original measurement variables that describe the significant variation in the data. The comparison of the two proposed methods was carried out using postoperative patient data. Experimental results demonstrate the applicability of the two proposed methods to complex data. Fuzzy PCA was used in the classification problem. The classification was performed using the similarity classifier algorithm, where the weights of the total similarity measure are optimized with a differential evolution algorithm. This thesis presents the comparison of the classification results based on the data obtained from the fuzzy PCA.
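The plain (non-fuzzy) PCA feature-extraction step described above can be sketched in a few lines; the thesis' fuzzy-PCA extension is not reproduced here:

```python
import numpy as np

def pca(X, n_components):
    """Project data onto the top principal components of its covariance."""
    Xc = X - X.mean(axis=0)                    # center each variable
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]              # projected (reduced) data

# Synthetic high-dimensional data with one dominant variance direction.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
X[:, 0] *= 5.0

Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The first projected coordinate captures the dominant variance direction, which is exactly the "relevant features with minimal information loss" property that the subsequent similarity classification relies on.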
Abstract:
This thesis aims to examine the possibilities that reporting offers for steering a company and easing management decision-making. The theory section reviews how information needs are defined, particularly from the perspective of company management: what information about the company's operations should be collected in its information systems for it to be of real significance. At times, information is also needed about matters outside the company, and processing such information should likewise be possible in the company's information systems. This naturally places considerable demands on the systems, and some of the related technical factors are presented. The technology for processing very diverse information does exist; what is harder is determining which information actually matters. Means also exist for condensing and presenting large amounts of information, and these are presented at a general level without focusing on any single model. The underlying idea is that each company should work out, from its own starting points, the approach best suited to it. The same formulaic model does not necessarily suit everyone, and on the other hand the differences between the various scorecard models are very small. In principle, all the models aim to give as comprehensive a picture of the company as possible: the factors that most affect the company's future success are identified from the standpoint of its operations, and on this basis the most important matters to monitor are found. Finally, the operations of the case company and the information systems it uses are briefly described. The thesis also analyzes what information about the company's operations would be needed and from which existing system it can be obtained. Where information was not found in the systems, it is explained how the matter was handled from a reporting perspective and how the company's reporting as a whole was built in the course of the thesis.