917 results for structuration of lexical data bases


Relevance:

100.00%

Publisher:

Abstract:

The proportion of the population living in or around cities is greater than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste and noise, as well as health problems, are the result of this still ongoing process. Urban planners have to find solutions to these complex problems and, at the same time, ensure the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. To gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help to communicate the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is demonstrated with two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic generated by commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision-making processes of urban planners.
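As an illustration of the final point, the classic gravity model expresses the interaction between two places as proportional to the product of their populations divided by a power of the distance between them. The following minimal Python sketch uses hypothetical populations, coordinates and constants; the thesis employs a related but not identical model.

```python
import numpy as np

# Gravity-model sketch: interaction T_ij = k * P_i * P_j / d_ij**beta.
# Populations, coordinates and constants below are hypothetical.
population = np.array([120_000, 45_000, 8_000])            # P_i
coords = np.array([[0.0, 0.0], [12.0, 5.0], [3.0, 20.0]])  # positions in km
k, beta = 1e-4, 2.0                                        # assumed constants

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                                # no self-interaction
T = k * np.outer(population, population) / d**beta         # flow matrix T_ij
print(T.round(1))
```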

Relevance:

100.00%

Publisher:

Abstract:

The emergence of powerful new technologies, the existence of large quantities of data, and increasing demands for the extraction of added value from these technologies and data have created a number of significant challenges for those charged with both corporate and information technology management. The possibilities are great, the expectations high, and the risks significant. Organisations seeking to employ cloud technologies and exploit the value of the data to which they have access, be this in the form of "Big Data" available from different external sources or data held within the organisation, in structured or unstructured formats, need to understand the risks involved in such activities. Data owners have responsibilities towards the subjects of the data and must also, frequently, demonstrate that they are in compliance with current standards, laws and regulations. This thesis sets out to explore the nature of the technologies that organisations might utilise, identify the most pertinent constraints and risks, and propose a framework for the management of data from discovery to external hosting that will allow the most significant risks to be managed through the definition, implementation, and performance of appropriate internal control activities.

Relevance:

100.00%

Publisher:

Abstract:

The graphical representation of spatial soil properties in a digital environment is complex because it requires converting data collected at discrete points into a continuous surface. The objective of this study was to apply three-dimensional interpolation and visualization techniques to soil texture and fertility properties and to establish relationships with pedogenetic factors and processes in a slope area. The GRASS Geographic Information System was used to generate the three-dimensional models, and the ParaView software to visualize the soil volumes. Samples of the A, AB, BA, and B horizons were collected on a regular 122-point grid in a 13 ha area in Pinhais, PR, southern Brazil. Geoprocessing and graphic computing techniques were effective in identifying and delimiting soil volumes with distinct ranges of fertility properties confined within the soil matrix. Both the three-dimensional interpolation and the visualization tool facilitated the interpretation, in a continuous space (volumes), of cause-effect relationships between soil texture and fertility properties and pedological factors and processes, such as higher clay contents following the drainage lines of the area. The flattest part, with more weathered soils (Oxisols), had the highest pH values and lower Al3+ concentrations. These data interpolation and visualization techniques have great potential for use in diverse areas of soil science, such as the identification of soil volumes that occur side by side but exhibit different physical, chemical, and mineralogical conditions for plant root growth, and the monitoring of plumes of organic and inorganic pollutants in soils and sediments, among other applications. The methodological details for the interpolation and three-dimensional viewing of soil data are presented here.
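The 3-D interpolation step in the study was performed with GRASS GIS and visualized in ParaView; as a rough stand-in, the sketch below interpolates scattered (x, y, depth) soil samples onto a voxel grid with simple inverse-distance weighting in Python. Sample locations, values and grid dimensions are hypothetical, and the method is illustrative rather than the one used in the study.

```python
import numpy as np

# Hypothetical samples: 122 points with x, y (m), depth (m) and clay content (%).
rng = np.random.default_rng(0)
samples = rng.uniform([0, 0, 0], [100, 100, 2], size=(122, 3))
clay = rng.uniform(20, 60, size=122)

def idw_3d(points, values, grid_points, power=2.0, eps=1e-9):
    """Interpolate scattered 3-D observations onto arbitrary grid points (IDW)."""
    d = np.linalg.norm(grid_points[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

gx, gy, gz = np.meshgrid(np.linspace(0, 100, 20),
                         np.linspace(0, 100, 20),
                         np.linspace(0, 2, 5), indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
clay_volume = idw_3d(samples, clay, grid).reshape(gx.shape)  # voxel model of clay %
```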

Relevance:

100.00%

Publisher:

Abstract:

Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate them using simulation studies. Our comparative analysis uses methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but its Type I error rates were poorly controlled, so it did not show the performance improvements under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence, extrapolation of these findings to other situations should be done cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on one or a few data sets, and should be considered a necessary foundation for statements of this type in the future.
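To make the comparison concrete, the sketch below simulates a response with exponentially spatially correlated errors and contrasts ordinary least squares with generalized least squares supplied with the true correlation matrix. The data, correlation range and coefficients are assumptions for illustration, not the simulation design of the study.

```python
import numpy as np
import statsmodels.api as sm

# Simulate y = 1 + 0.5*x + spatially correlated noise on random 2-D locations.
rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.normal(size=n)
X = sm.add_constant(x)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-d / 2.0)                       # exponential correlation, range = 2
errors = np.linalg.cholesky(Sigma) @ rng.normal(size=n)
y = 1.0 + 0.5 * x + errors                     # true slope = 0.5

ols = sm.OLS(y, X).fit()
gls = sm.GLS(y, X, sigma=Sigma).fit()          # accounts for spatial correlation
print("OLS slope:", ols.params[1], "SE:", ols.bse[1])
print("GLS slope:", gls.params[1], "SE:", gls.bse[1])
```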

Relevance:

100.00%

Publisher:

Abstract:

In June 2011, Lettie Prell released a report entitled Iowa Recidivism Report: Prison Return Rates (FY2007 Releases Tracked for 3 Years), in which she finds that prison inmates who are paroled are less likely to return to prison due to a new conviction than are inmates who leave prison due to expiration of sentence.

Relevance:

100.00%

Publisher:

Abstract:

This report presents the analysis of work zone field data collected on interstate highways in Missouri to determine the mean breakdown and queue-discharge flow rates as measures of capacity. Several days of traffic data collected at a work zone near Pacific, Missouri, with a speed limit of 50 mph, were analyzed in both the eastbound and westbound directions. A total of eleven breakdown events were identified using average speed profiles, and the traffic flows prior to and after the onset of congestion were studied. Breakdown flow rates ranged from 1194 to 1404 vphpl, with an average of 1295 vphpl, and a mean queue-discharge rate of 1072 vphpl was determined. Expressed in passenger cars per hour per lane (pcphpl), as used by the Highway Capacity Manual 2000 (HCM), the mean queue-discharge rate was 1199, well below the HCM's average capacity of 1600 pcphpl. This reduced capacity at the site is attributable mainly to the narrower lane width and the high percentage of heavy vehicles, around 25%, in the traffic stream. The difference between the mean breakdown flow (1295 vphpl) and the queue-discharge flow (1072 vphpl) has been observed widely and is due to reduced traffic flow once traffic breaks down and queues start to form. The Missouri DOT currently uses a spreadsheet for work zone planning applications that assumes the same values for the breakdown and mean queue-discharge flow rates. This study proposes that breakdown flow rates should be used to forecast the onset of congestion, whereas mean queue-discharge flow rates should be used to estimate delays under congested conditions. Hence, it is recommended that the spreadsheet be refined accordingly.
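The vphpl-to-pcphpl conversion can be reproduced approximately with the HCM heavy-vehicle adjustment factor. In the sketch below, the passenger-car equivalent E_T = 1.5 (level terrain) is an assumption, since the report's exact factors are not given; it yields roughly 1206 pcphpl, close to the reported 1199.

```python
# Heavy-vehicle adjustment: f_HV = 1 / (1 + P_T * (E_T - 1)); pcphpl = vphpl / f_HV.
queue_discharge_vphpl = 1072      # observed mean queue-discharge rate
p_trucks = 0.25                   # heavy-vehicle share reported at the site
E_T = 1.5                         # assumed passenger-car equivalent per truck

f_hv = 1.0 / (1.0 + p_trucks * (E_T - 1.0))           # ≈ 0.889
queue_discharge_pcphpl = queue_discharge_vphpl / f_hv
print(round(queue_discharge_pcphpl))                   # ≈ 1206, close to the reported 1199
```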

Relevance:

100.00%

Publisher:

Abstract:

This report, produced by the CJJP, contains the following information:

1. Rate of reported rapes in Iowa, the West North Central states, and the U.S.
2. Statewide sex offense charges and convictions.
3. Iowa convictions for sex offenses, by class.
4. Sex offender prison and probation entries, FY2005.
5. Prison admissions for sex offenses, FY1995-2005.
6. Sex offender releases from Iowa prisons, 1990 and 1996-2005.
7. Three-year rates of sex offender recidivism.
8. Three-year rates of sex offender recidivism, by parole and expiration.

Relevance:

100.00%

Publisher:

Abstract:

The paper presents the Multiple Kernel Learning (MKL) approach as a modelling and data exploration tool and applies it to the problem of wind speed mapping. Support Vector Regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple Kernel Learning is applied to learn kernels for individual features and for thematic feature subsets, both in the context of feature selection and of optimal parameter determination. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
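As a simplified illustration of the approach, the sketch below feeds a fixed convex combination of per-feature-group RBF kernels to a precomputed-kernel SVR in scikit-learn. Full MKL learns the kernel weights jointly with the regressor; the feature groups, weights and synthetic data used here are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

# Synthetic terrain features and a synthetic target standing in for mean wind speed.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))        # e.g. slope, curvature, directional derivatives
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=300)

groups = [slice(0, 2), slice(2, 4), slice(4, 6)]  # thematic feature subsets (assumed)
weights = np.array([0.5, 0.3, 0.2])               # kernel weights d_m (assumed fixed)

def combined_kernel(A, B):
    """Weighted sum of RBF kernels, one per feature group."""
    return sum(w * rbf_kernel(A[:, g], B[:, g], gamma=0.5)
               for w, g in zip(weights, groups))

train, test = slice(0, 250), slice(250, 300)
model = SVR(kernel="precomputed", C=10.0, epsilon=0.05)
model.fit(combined_kernel(X[train], X[train]), y[train])
pred = model.predict(combined_kernel(X[test], X[train]))   # rows: test, cols: train
```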

Relevance:

100.00%

Publisher:

Abstract:

The overall system is designed to permit automatic collection of delamination field data for bridge decks. In addition to measuring and recording the data in the field, the system provides for transferring the recorded data to a personal computer for processing and plotting. This permits rapid turnaround from data collection to a finished plot of the results, in a fraction of the time previously required for manual analysis of the analog data captured on a strip chart recorder. In normal operation, the Delamtect provides an analog voltage on each of two channels, proportional to the extent of any delamination. These voltages are recorded on a strip chart for later visual analysis. An event marker voltage, produced by a momentary push button on the handle, is also provided by the Delamtect and recorded on a third channel of the analog recorder.
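A minimal post-processing sketch is given below, assuming the recorded data arrive as regularly sampled voltages for the two delamination channels plus the event-marker channel. The sampling rate, voltage levels and detection threshold are all assumptions, since the report does not specify the digital file format.

```python
import numpy as np

# Simulated 60 s record: two delamination channels plus an event-marker channel.
rng = np.random.default_rng(3)
rate_hz = 100
t = np.arange(0, 60, 1 / rate_hz)
ch1 = np.abs(rng.normal(0.2, 0.1, t.size))   # channel 1 voltage (V), hypothetical
ch2 = np.abs(rng.normal(0.2, 0.1, t.size))   # channel 2 voltage (V), hypothetical
marker = np.zeros(t.size)
marker[::rate_hz * 10] = 5.0                 # push-button pulse every 10 s (assumed)

stations = np.flatnonzero(marker > 2.5)      # event-marker sample indices
delaminated = (ch1 > 0.5) | (ch2 > 0.5)      # assumed delamination threshold (V)
print(f"{stations.size} event marks; {delaminated.mean():.1%} of samples flagged")
```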

Relevance:

100.00%

Publisher:

Abstract:

The use of lightweight aggregates in prestressed concrete is becoming more of a reality as our design criteria become more demanding. Bridge girders of greater lengths have been restricted from travel on many of our highways because the combined weight of the girders and the transporting vehicle is excessive, making hauls of any distance prohibitive. This, along with new safety recommendations, prompted the State of Iowa to investigate the use of lightweight aggregate bridge girders. A series of three projects was started to investigate the possibility of using lightweight aggregate in prestressed concrete. The objective of this project is to study the effect that lightweight aggregate concrete has on the camber of bridge girders when used in a field situation.

Relevance:

100.00%

Publisher:

Abstract:

This report describes the development of performance measures for the Iowa DOT Construction Offices. The offices are responsible for administering all transportation construction projects for the Iowa DOT. In conjunction with a steering team composed of representatives of the Construction Offices, the research team developed a list of eight key processes and a set of measures for each. Two kinds of data were gathered: baseline data and benchmark data. Baseline data is used to characterize current performance. Benchmark data is gathered to find organizations that have excellent performance records for one or more key functions. This report discusses the methodology used and the results obtained. The data obtained represents the first set of data points. Subsequent years will establish trends for each of the measures, showing improvement or lack of it.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: There is an ongoing debate as to whether combined antiretroviral treatment (cART) during pregnancy is an independent risk factor for prematurity in HIV-1-infected women. OBJECTIVE: The aim of the study was to examine (1) crude effects of different ART regimens on prematurity, (2) the association between duration of cART and duration of pregnancy, and (3) the role of possibly confounding risk factors for prematurity. METHOD: We analysed data from 1180 pregnancies prospectively collected by the Swiss Mother and Child HIV Cohort Study (MoCHiV) and the Swiss HIV Cohort Study (SHCS). RESULTS: Odds ratios for prematurity in women receiving mono/dual therapy and cART were 1.8 [95% confidence interval (CI) 0.85-3.6] and 2.5 (95% CI 1.4-4.3) compared with women not receiving ART during pregnancy (P=0.004). In a subgroup of 365 pregnancies with comprehensive information on maternal clinical, demographic and lifestyle characteristics, there was no indication that maternal viral load, age, ethnicity or history of injecting drug use affected prematurity rates associated with the use of cART. Duration of cART before delivery was also not associated with duration of pregnancy. CONCLUSION: Our study indicates that confounding by maternal risk factors or duration of cART exposure is not a likely explanation for the effects of ART on prematurity in HIV-1-infected women.
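For readers unfamiliar with the statistic, the sketch below shows how an odds ratio and its 95% confidence interval are computed from a 2x2 table using the Woolf (logit) method. The counts are hypothetical and are not the MoCHiV/SHCS data; the study reports OR 2.5 (95% CI 1.4-4.3) for cART versus no ART.

```python
import numpy as np

# Hypothetical 2x2 table: rows = exposure (cART / no ART), columns = preterm / term.
a, b = 60, 240    # exposed: preterm, term    -- hypothetical counts
c, d = 20, 200    # unexposed: preterm, term  -- hypothetical counts

or_ = (a * d) / (b * c)                               # odds ratio
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)            # SE of log(OR), Woolf method
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```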

Relevance:

100.00%

Publisher:

Abstract:

Plant growth analysis presents difficulties related to the statistical comparison of growth rates, and the analysis of variance of primary data could guide the interpretation of results. The objective of this work was to evaluate the analysis of variance of data from distinct harvests of an experiment, focusing especially on the homogeneity of variances and the choice of an adequate ANOVA model. Data from five experiments covering different crops and growth conditions were used. Of the total number of variables, 19% were originally homoscedastic, 60% became homoscedastic after logarithmic transformation, and 21% remained heteroscedastic after transformation. Data transformation did not affect the F test in one experiment, whereas in the other experiments transformation modified the F test, usually reducing the number of significant effects. Even when transformation did not alter the F test, mean comparisons led to divergent interpretations. The mixed ANOVA model, considering harvest as a random effect, reduced the number of significant effects for every factor whose F test was modified by this model. Examples illustrated that the analysis of variance of primary variables provides a tool for identifying significant differences in growth rates. The analysis of variance imposes restrictions on the experimental design, thereby eliminating some of the advantages of functional growth analysis.
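A minimal sketch of the workflow described above, using simulated data: test the homogeneity of variances across harvests, apply a logarithmic transformation, and fit a mixed model with harvest as a random effect. The column names, simulated values and libraries (scipy, statsmodels) are assumptions, not the analyses of the original experiments.

```python
import numpy as np
import pandas as pd
from scipy.stats import levene
import statsmodels.formula.api as smf

# Simulated data: 5 harvests x 2 treatments, variance growing with the mean.
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "harvest":   np.repeat([1, 2, 3, 4, 5], 20),
    "treatment": np.tile(["A", "B"], 50),
})
df["dry_mass"] = np.exp(0.5 * df["harvest"] + (df["treatment"] == "B") * 0.3
                        + rng.normal(0, 0.2, len(df)))

groups = [g["dry_mass"].values for _, g in df.groupby("harvest")]
print("Levene p (raw):", levene(*groups).pvalue)                 # expect small p
print("Levene p (log):", levene(*[np.log(g) for g in groups]).pvalue)

df["log_mass"] = np.log(df["dry_mass"])                          # stabilize variance
mixed = smf.mixedlm("log_mass ~ treatment", df, groups=df["harvest"]).fit()
print(mixed.summary())                                           # harvest as random effect
```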