900 results for Data Structures, Cryptology and Information Theory
Abstract:
This paper explores the possibility of using data from social bookmarking services to measure the use of information by academic researchers. Social bookmarking data can be used to augment participative methods (e.g. interviews and surveys) and non-participative methods (e.g. citation analysis and transaction logs) to measure the use of scholarly information. We use BibSonomy, a free resource-sharing system, as a case study. Results show that published journal articles are by far the most popular type of source bookmarked, followed by conference proceedings and books. Commercial journal publisher platforms are the most popular type of information resource bookmarked, followed by websites, records in databases and digital repositories. Usage of open access information resources is low in comparison with toll-access journals. In the case of open access repositories, there is a marked preference for the use of subject-based repositories over institutional repositories. The results are consistent with those observed in related studies based on surveys and citation analysis, confirming the possible use of bookmarking data in studies of information behaviour in academic settings. The main advantages of using social bookmarking data are that it is an unobtrusive approach, it captures the reading habits of researchers who are not necessarily authors, and data are readily available. The main limitation is that a significant amount of human resources is required for cleaning and standardizing the data.
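As a toy illustration of the tallying step described above, the sketch below counts bookmarked source and platform types from a cleaned sample of records; the field names and category labels are assumptions for illustration, not BibSonomy's actual schema.

```python
from collections import Counter

# Hypothetical cleaned bookmark records; "source" and "platform" are
# assumed field names standing in for the categories used in the study.
bookmarks = [
    {"source": "journal article", "platform": "commercial publisher"},
    {"source": "journal article", "platform": "subject repository"},
    {"source": "conference paper", "platform": "commercial publisher"},
    {"source": "book", "platform": "website"},
    {"source": "journal article", "platform": "database record"},
]

source_counts = Counter(b["source"] for b in bookmarks)
platform_counts = Counter(b["platform"] for b in bookmarks)

print(source_counts.most_common())    # journal articles dominate this toy sample
print(platform_counts.most_common())
```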
Abstract:
PURPOSE: The current study tested the applicability of Jessor's problem behavior theory (PBT) in national probability samples from Georgia and Switzerland. Comparisons focused on (1) the applicability of the problem behavior syndrome (PBS) in both developmental contexts, and (2) the applicability of employing a set of theory-driven risk and protective factors in the prediction of problem behaviors. METHODS: School-based questionnaire data were collected from n = 18,239 adolescents in Georgia (n = 9499) and Switzerland (n = 8740) following the same protocol. Participants rated five measures of problem behaviors (alcohol and drug use, problems because of alcohol and drug use, and deviance), three risk factors (future uncertainty, depression, and stress), and three protective factors (family, peer, and school attachment). Final study samples included n = 9043 Georgian youth (mean age = 15.57; 58.8% females) and n = 8348 Swiss youth (mean age = 17.95; 48.5% females). Data analyses were completed using structural equation modeling, path analyses, and post hoc z-tests for comparisons of regression coefficients. RESULTS: Findings indicated that the PBS replicated in both samples, and that theory-driven risk and protective factors accounted for 13% and 10% of the variance in the PBS in the Georgian and Swiss samples, respectively, net of the effects of demographic variables. Follow-up z-tests provided evidence of some differences in the magnitude, but not direction, of five of six individual paths by country. CONCLUSION: PBT and the PBS find empirical support in these Eurasian and Western European samples; thus, Jessor's theory holds value and promise in understanding the etiology of adolescent problem behaviors outside of the United States.
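The post hoc z-tests for comparing regression coefficients across two independent samples can be sketched as follows; the coefficients and standard errors below are illustrative, not the study's estimates.

```python
import math

def coef_z_test(b1, se1, b2, se2):
    """Two-sample z-test for the difference between regression coefficients
    estimated in two independent samples (e.g. Georgia vs. Switzerland):
    z = (b1 - b2) / sqrt(se1^2 + se2^2)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Made-up path coefficients for one risk factor in each country.
z = coef_z_test(0.35, 0.04, 0.18, 0.05)
print(round(z, 2))  # |z| > 1.96 indicates a difference in magnitude at the 5% level
```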
Abstract:
The emergence of powerful new technologies, the existence of large quantities of data, and increasing demands for the extraction of added value from these technologies and data have created a number of significant challenges for those charged with both corporate and information technology management. The possibilities are great, the expectations high, and the risks significant. Organisations seeking to employ cloud technologies and exploit the value of the data to which they have access, be this in the form of "Big Data" available from different external sources or data held within the organisation, in structured or unstructured formats, need to understand the risks involved in such activities. Data owners have responsibilities towards the subjects of the data and must also, frequently, demonstrate that they are in compliance with current standards, laws and regulations. This thesis sets out to explore the nature of the technologies that organisations might utilise, identify the most pertinent constraints and risks, and propose a framework for the management of data from discovery to external hosting that will allow the most significant risks to be managed through the definition, implementation, and performance of appropriate internal control activities.
Abstract:
Introduction: Survival of children born prematurely or with very low birth weight has increased dramatically, but the long-term developmental outcome remains unknown. Many children have deficits in cognitive capacities, in particular involving executive domains, and those disabilities are likely to involve a central nervous system deficit. To understand their neurostructural origin, we used diffusion tensor imaging (DTI). Structurally segregated and functionally specialized regions of the cerebral cortex are interconnected by a dense network of axonal pathways. We noninvasively mapped these pathways across cortical hemispheres and constructed normalized structural connection matrices derived from DTI MR tractography. Group comparisons of brain connectivity reveal significant changes in fiber density in children with poor intrauterine growth and in extremely premature children (gestational age < 28 weeks at birth) compared to control subjects. These changes suggest a link between cortico-axonal pathways and the central nervous system deficit. Methods: Sixty prematurely born children (5-6 years old) were scanned on a clinical 3T scanner (Magnetom Trio, Siemens Medical Solutions, Erlangen, Germany) at two hospitals (HUG, Geneva and CHUV, Lausanne). For each subject, T1-weighted MPRAGE images (TR/TE=2500/2.91, TI=1100, resolution=1x1x1mm, matrix=256x154) and DTI images (30 directions, TR/TE=10200/107, in-plane resolution=1.8x1.8x2mm, 64 axial slices, matrix=112x112) were acquired. Parent(s) provided written consent, following prior ethics board approval. The extraction of the whole-brain structural connectivity matrix was performed following (Cammoun, 2009 and Hagmann, 2008). The MPRAGE images were registered to the non-weighted DTI images using an affine registration, and WM-GM segmentation was performed on them. In order to have equal anatomical localization among subjects, 66 cortical regions with anatomical landmarks were created using the curvature information, i.e.
sulcus and gyrus (Cammoun et al, 2007; Fischl et al, 2004; Desikan et al, 2006) with the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/). Tractography was performed in WM using an algorithm especially designed for DTI/DSI data (Hagmann et al., 2007), and both types of information were then combined in a matrix. Each row and column of the matrix corresponds to a particular ROI. Each cell of index (i,j) represents the fiber density of the bundle connecting ROIs i and j. Subdividing each cortical region, we obtained 4 connectivity matrices of different resolutions (33, 66, 125 and 250 ROI/hemisphere) for each subject. Subjects were sorted into 3 different groups, namely (1) control, (2) Intrauterine Growth Restriction (IUGR), and (3) Extreme Prematurity (EP), depending on their gestational age, weight and percentile-weight score at birth. Group-to-group comparisons were performed between groups (1)-(2) and (1)-(3). The mean age at examination was similar across the three groups. Results: Quantitative analyses were performed between groups to determine fiber-density differences. For each group, a mean connectivity matrix at the 33 ROI/hemisphere resolution was computed. In addition, for all matrix resolutions (33, 66, 125, 250 ROI/hemisphere), the number of bundles was computed and averaged. As seen in figure 1, EP and IUGR subjects present an overall reduction of fiber density in both interhemispheric and intrahemispheric connections. This is given quantitatively in table 1. IUGR subjects present a higher percentage of missing fiber bundles than EP subjects when compared to control subjects (~16% against 11%). When comparing both groups to control subjects, for the EP subjects the occipito-parietal regions seem less interhemispherically connected, whilst the intrahemispheric networks present a lack of fiber density in the limbic system. Children born with IUGR have reductions in interhemispheric connections similar to those of the EP group.
However, the cuneus and precuneus connections with the precentral and paracentral lobes are even lower than in the case of the EP group. For the intrahemispheric connections, the IUGR group presents a loss of fiber density between the deep gray matter structures (striatum) and the frontal and middle frontal poles, connections typically involved in the control of executive functions. For the qualitative analysis, a t-test comparing the number of bundles (p-value<0.05) gave some preliminary significant results (figure 2). Again, even if both IUGR and EP subjects appear to have significantly fewer connections compared to the control subjects, the IUGR cohort seems to present a greater lack of fiber density, especially involving the cuneus, precuneus and parietal areas. In terms of fiber density, preliminary Wilcoxon tests seem to validate the hypothesis set by the previous analysis. Conclusions: The goal of this study was to determine the effect of extreme prematurity and poor intrauterine growth on neurostructural development at the age of 6 years. These data indicate that differences in connectivity may well be the basis for the neurostructural and neuropsychological deficits described in these populations in the absence of overt brain lesions (Inder TE, 2005; Borradori-Tolsa, 2004; Dubois, 2008). Indeed, we suggest that IUGR and prematurity lead to alterations of connectivity between brain structures, especially in the occipito-parietal and frontal lobes for EP and the frontal and middle temporal poles for IUGR. Overall, IUGR children have a higher loss of connectivity in the overall connectivity matrix than EP children. In both cases, the localized alteration of connectivity suggests a direct link between cortico-axonal pathways and the central nervous system deficit. Our next step is to link these connectivity alterations to performance in executive function tests.
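A minimal sketch of the group-comparison step, with simulated stand-ins for the per-subject connectivity matrices; the real matrices come from the DTI tractography pipeline described above, and the numbers here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-subject connectivity matrices: fiber density between 66 cortical
# ROIs (33 per hemisphere), symmetric with an empty diagonal.
n_roi = 66

def toy_group(n_subjects, density_scale):
    mats = rng.random((n_subjects, n_roi, n_roi)) * density_scale
    mats = (mats + mats.transpose(0, 2, 1)) / 2   # undirected -> symmetric
    for m in mats:
        np.fill_diagonal(m, 0.0)
    return mats

control = toy_group(20, 1.0)
iugr = toy_group(20, 0.8)      # simulate an overall reduction in fiber density

# Group mean connectivity matrices and the overall fiber-density reduction.
mean_control = control.mean(axis=0)
mean_iugr = iugr.mean(axis=0)
reduction_pct = 100 * (1 - mean_iugr.sum() / mean_control.sum())
print(f"overall fiber-density reduction vs control: {reduction_pct:.1f}%")
```

Per-connection statistical tests (e.g. the Wilcoxon tests mentioned above) would then be run cell-by-cell across the two stacks of matrices.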
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
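The advantage of stochastic simulations over a single-value regression prediction can be sketched as follows: given an ensemble of simulated contamination fields (here arbitrary lognormal values, purely for illustration), a per-cell exceedance-probability map follows directly from the realizations.

```python
import numpy as np

rng = np.random.default_rng(42)

# An ensemble of stochastic realizations of a contamination field
# (200 realizations over 50 grid cells, arbitrary units).
n_realizations, n_cells = 200, 50
realizations = rng.lognormal(mean=0.0, sigma=0.5, size=(n_realizations, n_cells))

# Risk map: per-cell probability of exceeding an assumed regulatory threshold.
threshold = 1.5
p_exceed = (realizations > threshold).mean(axis=0)

# A single-value prediction (e.g. the ensemble mean) hides this information.
prediction = realizations.mean(axis=0)
print(p_exceed.min(), p_exceed.max())
```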
Abstract:
The overall system is designed to permit automatic collection of delamination field data for bridge decks. In addition to measuring and recording the data in the field, the system provides for transferring the recorded data to a personal computer for processing and plotting. This permits rapid turnaround from data collection to a finished plot of the results in a fraction of the time previously required for manual analysis of the analog data captured on a strip chart recorder. In normal operation the Delamtect provides an analog voltage for each of two channels which is proportional to the extent of any delamination. These voltages are recorded on a strip chart for later visual analysis. An event marker voltage, produced by a momentary push button on the handle, is also provided by the Delamtect and recorded on a third channel of the analog recorder.
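A hedged sketch of how the digitized record might be processed: two channels with voltages proportional to delamination extent, plus a third event-marker channel used to segment the record. The sampling rate, threshold and waveforms are assumptions for illustration, not the system's actual parameters.

```python
import numpy as np

fs = 100.0                          # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Simulated Delamtect channels: a delamination response on channel 1 only.
ch1 = 0.2 + 0.8 * ((t > 4) & (t < 6))
ch2 = 0.2 * np.ones_like(t)
# Momentary push-button pulses on the event-marker channel.
marker = (np.abs(t - 2.0) < 0.05) | (np.abs(t - 8.0) < 0.05)

# Locate marker pulses (rising edges) to tie readings to deck stations.
edges = np.flatnonzero(np.diff(marker.astype(int)) == 1)
stations = t[edges]

# Flag samples where either channel exceeds an assumed delamination threshold.
threshold = 0.5
delaminated = (ch1 > threshold) | (ch2 > threshold)
print(stations, delaminated.mean())
```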
Abstract:
The Iowa Department of Transportation (DOT) is responsible for approximately 4,100 bridges and structures that are a part of the state’s primary highway system, which includes the Interstate, US, and Iowa highway routes. A pilot study was conducted for six bridges in two Iowa river basins—the Cedar River Basin and the South Skunk River Basin—to develop a methodology to evaluate their vulnerability to climate change and extreme weather. The six bridges had been either closed or severely stressed by record streamflow within the past seven years. An innovative methodology was developed to generate streamflow scenarios given climate change projections. The methodology selected appropriate rainfall projection data to feed into a streamflow model that generated continuous peak annual streamflow series for 1960 through 2100, which were used as input to PeakFQ to estimate return intervals for floods. The methodology evaluated the plausibility of rainfall projections and the credibility of streamflow simulation while remaining consistent with U.S. Geological Survey (USGS) protocol for estimating the return interval for floods. The results were conveyed in an innovative graph that combined historical and scenario-based design metrics for use in bridge vulnerability analysis and engineering design. The pilot results determined that the annual peak streamflow response to climate change will likely be basin-size dependent; that four of the six pilot study bridges would be exposed to increased frequency of extreme streamflow and would have a higher frequency of overtopping; that the proposed design for replacing the Interstate 35 bridges over the South Skunk River south of Ames, Iowa is resilient to climate change; and that some Iowa DOT bridge design policies could be reviewed to consider incorporating climate change information.
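The return-interval step can be sketched with a log-Pearson Type III fit to an annual peak series, the distribution PeakFQ fits under USGS guidance. This minimal version uses the Wilson-Hilferty frequency-factor approximation and made-up peak flows; it is a sketch of the idea, not a substitute for PeakFQ.

```python
import math
from statistics import NormalDist

def lp3_flood_quantile(peaks, return_period):
    """Log-Pearson Type III flood quantile from an annual peak series,
    via the Wilson-Hilferty approximation to the frequency factor."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    skew = (n * sum((x - mean) ** 3 for x in logs)) / ((n - 1) * (n - 2) * std ** 3)
    z = NormalDist().inv_cdf(1 - 1 / return_period)
    if abs(skew) < 1e-6:
        k = z
    else:
        k = (2 / skew) * ((1 + skew * z / 6 - skew ** 2 / 36) ** 3 - 1)
    return 10 ** (mean + k * std)

# Illustrative annual peak flows (cfs); real series would come from gage
# records or the simulated 1960-2100 streamflow scenarios.
peaks = [8200, 15300, 6100, 22000, 9800, 12500, 30100, 7400, 18700, 11000,
         9200, 26400, 13800, 5600, 16900]
q2 = lp3_flood_quantile(peaks, 2)
q100 = lp3_flood_quantile(peaks, 100)
print(round(q2), round(q100))  # the 100-year flood far exceeds the median flood
```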
Abstract:
Large Dynamic Message Signs (DMSs) have been increasingly used on freeways, expressways and major arterials to better manage the traffic flow by providing accurate and timely information to drivers. Overhead truss structures are typically employed to support those DMSs, allowing them to provide a wider display to more lanes. In recent years, increasing evidence has shown that the truss structures supporting these large and heavy signs are subjected to much more complex loadings than are typically accounted for in the codified design procedures. Consequently, some of these structures have required frequent inspections, retrofitting, and even premature replacement. Two manufacturing processes are primarily utilized in truss structures: welding and bolting. Recently, cracks at welding toes were reported for the structures employed in some states. Extremely large loads (e.g., due to high winds) could cause brittle fractures, and cyclic vibration (e.g., due to diurnal variation in temperature or due to oscillations in the wind force induced by vortex shedding behind the DMS) may lead to fatigue damage, these being the two major failure modes for metallic materials. Wind and strain resulting from temperature changes are the main loads that affect the structures during their lifetime. The American Association of State Highway and Transportation Officials (AASHTO) Specification defines the limit loads in dead load, wind load, ice load, and fatigue design for natural wind gust and truck-induced gust. The objectives of this study are to investigate wind and thermal effects in bridge-type overhead DMS truss structures and to improve the current design specifications (e.g., for thermal design). In order to accomplish this objective, it is necessary to study the structural behavior and the detailed strain-stress state of the truss structures caused by wind load on the DMS cabinet and thermal load on the truss supporting the DMS cabinet. The study is divided into two parts.
The Computational Fluid Dynamics (CFD) component and part of the structural analysis component of the study were conducted at the University of Iowa, while the field study and related structural analysis computations were conducted at Iowa State University. The CFD simulations were used to determine the air-induced forces (wind loads) on the DMS cabinets, and the finite element analysis was used to determine the response of the supporting trusses to these pressure forces. The field observation portion consisted of short-term monitoring of several DMS cabinet/trusses and long-term monitoring of one DMS cabinet/truss. The short-term monitoring was a one- or two-day event in which several message sign panels/trusses were tested. The long-term monitoring field study extended over several months. Analysis of the data focused on identifying important behaviors under both ambient and truck-induced winds and the effect of daily temperature changes. Results of the CFD investigation, field experiments and structural analysis of the wind-induced forces on the DMS cabinets and their effect on the supporting trusses showed that the passage of trucks cannot be responsible for the problems observed to develop at trusses supporting DMS cabinets. Rather, the data pointed toward the important effect of the thermal load induced by cyclic (diurnal) variations of the temperature. Thermal influence is not discussed in the specification, either in limit load or fatigue design. Although the frequency of the thermal load is low, results showed that when the temperature range is large, the stress range can be significant for the structure, especially near welded areas where stress concentrations may occur. Moreover, stress amplitude and range are the primary parameters for brittle fracture and fatigue-life estimation.
Long-term field monitoring of one of the overhead truss structures in Iowa was used as the research baseline to estimate the effects of diurnal temperature changes on fatigue damage. The evaluation of the collected data is an important approach for understanding the structural behavior and for the advancement of future code provisions. Finite element modeling was developed to estimate the strain and stress magnitudes, which were compared with the field monitoring data. The fatigue life of the truss structures was also estimated based on AASHTO specifications and the numerical modeling. The main conclusion of the study is that thermally induced fatigue damage of the truss structures supporting DMS cabinets is likely a significant contributing cause of the cracks observed to develop in such structures. Other probable causes of fatigue damage not investigated in this study are the cyclic oscillations of the total wind load associated with vortex shedding behind the DMS cabinet in high wind conditions, and fabrication tolerances and stresses induced by the fitting of tube-to-tube connections.
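The fatigue-life estimate from measured stress ranges can be sketched with Miner's rule and the AASHTO S-N form N = A / Sr^3. The detail-category constant and the daily stress-range spectrum below are assumptions for illustration, not values from the study.

```python
def fatigue_life_years(stress_ranges_per_day, A=44.0e8):
    """Miner's-rule fatigue life from a daily spectrum of stress ranges
    (Sr, in ksi), using the AASHTO S-N relation N = A / Sr^3.
    A = 44.0e8 corresponds to detail category C; treat it as an assumption."""
    damage_per_day = sum(sr ** 3 / A for sr in stress_ranges_per_day)
    return 1.0 / (damage_per_day * 365.0)

# One dominant diurnal thermal cycle plus smaller secondary cycles (assumed, ksi).
print(round(fatigue_life_years([6.0, 1.5, 1.0])), "years")
```

Because damage grows with the cube of the stress range, a large diurnal temperature swing near a welded stress concentration dominates the estimate.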
Abstract:
The Vertical Clearance Log is prepared for the purpose of providing vertical clearance restrictions by route on the primary road system. This report is used by the Iowa Department of Transportation’s Motor Carrier Services to route oversize vehicles around structures with vertical restrictions too low for the cargo height. The source of the data is the Geographic Information Management System (GIMS) that is managed by the Office of Research & Analytics in the Performance & Technology Division. The data is collected by inspection crews and through the use of LiDAR technology to reflect changes to structures on the primary road system. This log is produced annually.
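The routing use of the log can be sketched as a shortest-path search that skips any road segment whose recorded clearance is below the cargo height; the network, distances and clearances below are illustrative, not Iowa data.

```python
import heapq

def route_with_clearance(graph, start, goal, cargo_height):
    """Dijkstra shortest path that excludes segments whose vertical
    clearance is below the cargo height (toy oversize-vehicle routing)."""
    # graph: {node: [(neighbor, miles, clearance_ft), ...]}
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, miles, clearance in graph[u]:
            if clearance < cargo_height:
                continue                      # restricted structure: detour
            nd = d + miles
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

graph = {
    "A": [("B", 10, 14.5), ("C", 12, 16.8)],
    "B": [("D", 10, 13.9)],
    "C": [("D", 11, 17.0)],
    "D": [],
}
print(route_with_clearance(graph, "A", "D", 14.0))  # forced through C: B-D is too low
```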
Abstract:
Under optimal non-physiological conditions of low concentration and low temperature, proteins may spontaneously fold to the native state, as all the information for folding lies in the amino acid sequence of the polypeptide. However, under conditions of stress or high protein crowding, as inside cells, a polypeptide may misfold and enter an aggregation pathway resulting in the formation of misfolded conformers and fibrils, which can be toxic and lead to neurodegenerative illnesses, such as Alzheimer's, Parkinson's or Huntington's diseases, and to aging in general. To avert and revert protein misfolding and aggregation, cells have evolved a set of proteins called molecular chaperones. Here, I focused on the human cytosolic chaperones Hsp70 (DnaK) and Hsp110, the co-chaperone Hsp40 (DnaJ), and the chaperonin CCT (GroEL). The cytosolic molecular chaperones Hsp70s/Hsp110s and the chaperonins are highly upregulated in bacterial and human cells under different stresses and are involved both in the prevention and the reversion of protein misfolding and aggregation. Hsp70 works in collaboration with Hsp40 to reactivate misfolded or aggregated proteins in a strictly ATP-dependent manner. Chaperonins (CCT and GroEL) also unfold and reactivate stably misfolded proteins, but we found that they needed to use the energy of ATP hydrolysis in order to evict over-sticky misfolded intermediates that inhibited the unfoldase catalytic sites. In this study, we initially characterized a particular type of inactive, misfolded, monomeric luciferase and rhodanese species that were obtained by repeated cycles of freeze-thawing (FT). These stable misfolded monomeric conformers (FT-luciferase and FT-rhodanese) had exposed hydrophobic residues and were enriched with non-native β-sheet structures (Chapter 2).
Using FT-luciferase as substrate, we found that the Hsp70 orthologs called Hsp110 (Sse in yeast) acted similarly to Hsp70: they were bona fide ATP-fuelled polypeptide unfoldases, much more than the mere nucleotide exchange factors they were generally thought to be. Moreover, we found that Hsp110 collaborated with Hsp70 in the disaggregation of stable protein aggregates, with Hsp70 and Hsp110 acting as equal partners that synergistically combined their individual ATP-consuming polypeptide unfoldase activities to reactivate the misfolded/aggregated proteins (Chapter 3). Using FT-rhodanese as substrate, we found that chaperonins (GroEL and CCT) could catalytically reactivate misfolded rhodanese monomers in the absence of ATP. Our results also suggested that encaging of an unfolding polypeptide inside the GroEL cavity under a GroES cap is not an obligatory step, as generally thought (Chapter 4). Further, we investigated the role of Hsp40, a J-protein co-chaperone of Hsp70, in targeting misfolded polypeptide substrates onto Hsp70 for unfolding. We found that even a large excess of monomeric unfolded α-synuclein did not inhibit DnaJ, whereas, in contrast, stable misfolded α-synuclein oligomers strongly inhibited the DnaK-mediated chaperone reaction by sequestering the DnaJ co-chaperone. This work revealed that DnaJ could specifically distinguish and bind potentially toxic, stably aggregated species, such as the soluble α-synuclein oligomers involved in Parkinson's disease, and, with the help of DnaK and ATP, convert them into harmless natively unfolded α-synuclein monomers (Chapter 5). Finally, our meta-analysis of microarray data from plant and animal tissues treated with various chemicals and abiotic stresses revealed possible co-expression between core chaperone machineries and their co-chaperone regulators.
It clearly showed that protein misfolding in the cytosol elicits a different response, consisting of upregulating the synthesis mainly of cytosolic chaperones, from protein misfolding in the endoplasmic reticulum (ER), which elicits a typical unfolded protein response (UPR) consisting of upregulating the synthesis mainly of ER chaperones. We proposed that drugs that best mimic heat or UPR stress in increasing the chaperone load in the cytoplasm or the ER, respectively, may prove effective at combating protein misfolding diseases and aging (Chapter 6).
Abstract:
Rural intersections account for 30% of crashes in rural areas and 6% of all fatal crashes, representing a significant but poorly understood safety problem. Transportation agencies have traditionally implemented countermeasures to address rural intersection crashes but frequently do not understand the dynamic interaction between the driver and roadway and the driver factors leading to these types of crashes. The Second Strategic Highway Research Program (SHRP 2) conducted a large-scale naturalistic driving study (NDS) using instrumented vehicles. The study has provided a significant amount of on-road driving data for a range of drivers. The present study utilizes the SHRP 2 NDS data, as well as SHRP 2 Roadway Information Database (RID) data, to observe driver behavior at rural intersections firsthand using video, vehicle kinematics, and roadway data, and to determine how roadway, driver, environmental, and vehicle factors interact to affect driver safety at rural intersections. A model of driver braking behavior was developed using a dataset of vehicle activity traces for several rural stop-controlled intersections. The model was developed using the point at which a driver reacts to the upcoming intersection by initiating braking as its dependent variable, with the driver’s age, type and direction of turning movement, and countermeasure presence as independent variables. Countermeasures such as on-pavement signing and overhead flashing beacons were found to increase the braking point distance, a finding that provides insight into the countermeasures’ effect on safety at rural intersections. The results of this model can lead to better roadway design, more informed selection of traffic control and countermeasures, and targeted information that can inform policy decisions. Additionally, a model of gap acceptance was attempted but was ultimately not developed due to the small size of the dataset.
However, a protocol for data reduction for a gap acceptance model was determined. This protocol can be utilized in future studies to develop a gap acceptance model that would provide additional insight into the roadway, vehicle, environmental, and driver factors that play a role in whether a driver accepts or rejects a gap.
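The braking-point model's form can be sketched as an ordinary least-squares regression on simulated data; the variables mirror those named above, but every number here is made up and carries no relation to the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observations: braking-point distance (ft) as a function of
# driver age, turning movement, and countermeasure presence.
n = 200
age = rng.integers(18, 80, n)
left_turn = rng.integers(0, 2, n)
countermeasure = rng.integers(0, 2, n)
# Assumed response: countermeasures move the braking point upstream.
braking_dist = (250 + 1.2 * age + 30 * left_turn
                + 80 * countermeasure + rng.normal(0, 25, n))

# OLS fit: intercept, age, left-turn, countermeasure coefficients.
X = np.column_stack([np.ones(n), age, left_turn, countermeasure])
beta, *_ = np.linalg.lstsq(X, braking_dist, rcond=None)
print(beta.round(1))
```

A positive countermeasure coefficient recovers the reported effect: signing and beacons increase the braking-point distance.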
Abstract:
In anticipation of regulation involving numeric turbidity limits at highway construction sites, research was conducted into the most appropriate and affordable methods for surface water monitoring. Measuring sediment concentration in streams may be conducted in a number of ways. As part of a project funded by the Iowa Department of Transportation, several testing methods were explored to determine the most affordable and appropriate methods for data collection, both in the field and in the lab. The primary purpose of the research was to determine the interchangeability of the acrylic transparency tube with the turbidimeter for water clarity analysis.
Abstract:
The Wigner higher-order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher-order moment spectra domains. A general class of time-frequency higher-order moment spectra is also defined in terms of arbitrary higher-order moments of the signal, as generalizations of Cohen's general class of time-frequency representations. The properties of the general class of time-frequency higher-order moment spectra can be related to the properties of WHOS, which are, in fact, extensions of the properties of the WD. Discrete time and frequency Wigner higher-order moment spectra (DTF-WHOS) distributions are introduced for signal processing applications and are shown to be implemented with two FFT-based algorithms. One application is presented in which the Wigner bispectrum (WB), which is a WHOS in the third-order moment domain, is utilized for the detection of transient signals embedded in noise. The WB is compared with the WD in terms of simulation examples and analysis of real sonar data. It is shown that better detection schemes can be derived, at low signal-to-noise ratio, when the WB is applied.
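A minimal sketch of the FFT-based implementation idea for the second-order case, the discrete Wigner-Ville distribution; the WHOS extend the same construction to higher-order moment (lag) domains, e.g. the WB adds a second lag dimension.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution via one FFT per time instant:
    W[n, k] = FFT over lag m of the kernel x[n+m] * conj(x[n-m])."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)          # admissible symmetric lags
        r = np.zeros(N, dtype=complex)
        for m in range(-taumax, taumax + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(r))
    return W

# A complex sinusoid concentrates along a single frequency line.
t = np.arange(64)
x = np.exp(2j * np.pi * 0.25 * t)
W = wigner_ville(x)
print(np.argmax(W[32]))  # peak at bin 32 = 2*f0*N mod N (the WD's doubled-frequency convention)
```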
Abstract:
The objective of this Master's thesis is to examine what new data-management problems arise when the product data of a mass-customized product is managed throughout the product's life cycle, and how these problems could be solved. The problems and challenges are collected from the literature, and the mass customization process is mapped to the PLM phases. A solution is investigated by testing how the STEP and PLCS standards, and a PLM system that supports these standards, could support life-cycle data management for a mass-customized product. Problems with MC products include the complexity of the product structure, traceability, and change management throughout the life cycle. STEP and PLCS can each, in their own areas, support data management. The generic product structure of an MC product must, however, be manually linked to the life-cycle data. A PLM system can support the life cycle of MC products, but because this functionality is not built into the system, challenges remain in improving support for MC products.
Abstract:
The objective of this thesis is to find out how information and communication technology affects the global consumption of printing and writing papers. Another objective is to find out whether these effects differ between paper grades. The empirical analysis is conducted by linear regression analysis using three sets of country-level panel data from 1990-2006. The newsprint data set contains 95 countries, the uncoated woodfree paper data set 61 countries, and the coated mechanical paper data set 42 countries. The material is based on paper consumption data from RISI’s Industry Statistics Database and on information and communication technology data from the GMID database. Results indicate that the number of Internet users has a statistically significant negative effect on the consumption of newsprint and on the consumption of coated mechanical paper, and that the number of mobile telephone users has a positive effect on the consumption of these papers. Results also indicate that information and communication technologies have only a small effect, or no significant effect at all, on the consumption of uncoated woodfree paper, but these results are somewhat more uncertain.
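The estimation idea can be sketched as a fixed-effects panel regression with country dummies; all data below are simulated, and only the model form (consumption on Internet and mobile penetration, per country) follows the text.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated panel: 5 countries observed over 17 years (shaped like 1990-2006).
n_countries, n_years = 5, 17
country = np.repeat(np.arange(n_countries), n_years)
internet = rng.uniform(0, 0.8, n_countries * n_years)   # users per capita
mobile = rng.uniform(0, 1.0, n_countries * n_years)
consumption = (10 - 4.0 * internet + 1.5 * mobile        # assumed true effects
               + np.take(rng.normal(0, 1, n_countries), country)  # country effects
               + rng.normal(0, 0.3, n_countries * n_years))

# Country dummies absorb the fixed effects; no separate global intercept.
dummies = (country[:, None] == np.arange(n_countries)).astype(float)
X = np.column_stack([internet, mobile, dummies])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
print(beta[:2].round(2))  # negative Internet effect, positive mobile effect
```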