905 results for Web Mining, Data Mining, User Topic Model, Web User Profiles


Relevance: 100.00%

Abstract:

Research in Requirements Engineering has been growing in recent years. Researchers are concerned with a set of open issues such as: communication among the several user profiles involved in software engineering; scope definition; and volatility and traceability issues. To cope with these issues, a set of works concentrates on (i) defining processes to collect clients' specifications in order to solve scope issues; (ii) defining models to represent requirements in order to address communication and traceability issues; and (iii) working on mechanisms and processes to be applied to requirements modeling in order to facilitate requirements evolution and maintenance, addressing volatility and traceability issues. We propose an iterative Model-Driven process to solve these issues, based on a double-layered CIM to communicate requirements-related knowledge to a wider range of stakeholders. We also present a tool to help the requirements engineer throughout the RE process. Finally, we present a case study to illustrate the benefits and usage of the process and tool.

Relevance: 100.00%

Abstract:

Internal and external computer network attacks and security threats follow standard patterns and sets of successive steps, which makes it possible to establish profiles or patterns. This well-known behavior is the basis of signature-analysis intrusion detection systems. This work presents a new attack signature model to be applied in the engines of network-based intrusion detection systems. The AISF (ACME! Intrusion Signature Format) model is built upon XML technology and supports intrusion signature handling and analysis, from storage to manipulation. Using this new model, storing and analyzing information about intrusion signatures for later use by an IDS becomes a simpler and more standardized process.
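The abstract does not reproduce the AISF schema itself. As a rough sketch of what an XML-encoded intrusion signature can look like (all element and attribute names below are invented for illustration, not the actual AISF format), using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML attack signature, loosely in the spirit of AISF.
# Element and attribute names are invented for illustration only.
doc = """
<signature id="ACME-0001">
  <name>TCP SYN port scan</name>
  <protocol>TCP</protocol>
  <pattern field="flags" value="SYN"/>
  <threshold events="100" window_s="10"/>
</signature>
"""

sig = ET.fromstring(doc)
sig_id = sig.get("id")                       # signature identifier
proto = sig.findtext("protocol")             # protocol the rule applies to
matched = sig.find("pattern").get("value")   # matched packet attribute
```

An IDS engine built on such a format can load, index and compare signatures with any standard XML toolchain, which is the standardization benefit the abstract points to.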

Relevance: 100.00%

Abstract:

Obesity is an increasing problem in several countries, leading to health problems. Physical exercise, in turn, can be used effectively by itself or in combination with dietary restriction to trigger weight loss. The present study was designed to evaluate the effects of aerobic exercise training on the lipid profile of obese male Wistar rats, in order to verify whether this model may be of value for the study of exercise in obesity. Obesity was induced by MSG administration (4 mg/g, every other day, from birth to 14 days old). After 14 from drug administration, the rats were separated into two groups: MSG-S (sedentary) and MSG-T (exercise trained). Exercise training consisted of 1 h/day, 5 days/week, with an overload of 5% body weight, for 10 weeks. Rats of the same age and strain, receiving saline at birth, were used as controls (C) and subdivided into two groups: C-S and C-T. At the end of the experimental period, MSG-T and C-T rats showed similar blood lactate and muscle glycogen responses to exercise training and acute exercise. MSG-S rats showed significantly higher carcass fat, serum triacylglycerol, serum insulin and liver total fat than C-S rats. On the other hand, MSG-T rats had lower carcass fat, serum triacylglycerol and liver total fat than MSG-S rats. There were no statistical differences in food intake and serum free fatty acids among the groups studied. These data indicate that this model may be of value for the study of exercise effects on tissue and circulating lipid profiles in obesity.

Relevance: 100.00%

Abstract:

Semi-supervised learning is applied to classification problems where only a small portion of the data items is labeled. In these cases, the reliability of the labels is a crucial factor, because mislabeled items may propagate wrong labels to a large portion of, or even the entire, data set. This paper addresses this problem by presenting a graph-based (network-based) semi-supervised learning method specifically designed to handle data sets with mislabeled samples. The method uses teams of walking particles, with competitive and cooperative behavior, for label propagation in the network constructed from the input data set. The proposed model is nature-inspired and incorporates features that make it robust to a considerable amount of mislabeled data items. Computer simulations show the performance of the method in the presence of different percentages of mislabeled data, in networks of different sizes and average node degrees. Importantly, these simulations reveal the existence of critical points in the mislabeled subset size, below which the network is free of wrong-label contamination, but above which the mislabeled samples start to propagate their labels to the rest of the network. Moreover, numerical comparisons have been made between the proposed method and other representative graph-based semi-supervised learning methods, using both artificial and real-world data sets. Interestingly, the proposed method increasingly outperforms the others as the percentage of mislabeled samples grows. © 2012 IEEE.
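The particle-competition mechanism itself is involved; as background, the generic idea it builds on, spreading seed labels over a graph constructed from the data, can be sketched in a few lines (this is plain neighbor-vote propagation, not the authors' particle model, and it lacks their mislabel robustness):

```python
# Baseline graph label propagation: seed labels spread to unlabeled
# nodes by majority vote over labeled neighbors. Illustrative only;
# NOT the particle-competition method described in the paper.
def propagate(adj, seeds, n_iter=10):
    labels = dict(seeds)                      # node -> current label
    for _ in range(n_iter):
        updated = dict(labels)
        for node, neighbors in adj.items():
            if node in seeds:                 # seed labels stay fixed
                continue
            votes = {}
            for nb in neighbors:
                if nb in labels:
                    votes[labels[nb]] = votes.get(labels[nb], 0) + 1
            if votes:                         # deterministic tie-break
                updated[node] = max(sorted(votes), key=votes.get)
        labels = updated
    return labels

# Two small cliques joined by one edge; one seed per clique.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = propagate(adj, {0: "A", 5: "B"})
```

A single mislabeled seed in a scheme like this can contaminate its whole community, which is exactly the failure mode the paper's competitive particles are designed to resist.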

Relevance: 100.00%

Abstract:

Last Glacial Maximum simulated sea surface temperatures from the Paleo-Climate version of the National Center for Atmospheric Research Coupled Climate Model (NCAR-CCSM) are compared with available reconstructions and data-based products for the tropical and South Atlantic region. Model results are compared to data proxies based on the Multiproxy Approach for the Reconstruction of the Glacial Ocean surface product (MARGO). Results show that the model sea surface temperature is not consistent with the proxy data over the whole region of interest. Discrepancies are found in the eastern, equatorial and high-latitude South Atlantic. The model overestimates the cooling in the southern South Atlantic (near 50 degrees S) shown by the proxy data. Near the equator, model and proxies are in better agreement. In the eastern part of the equatorial basin the model underestimates the cooling shown by all proxies. A northward shift in the position of the subtropical convergence zone in the simulation suggests a compression and/or an equatorward shift of the subtropical gyre at the surface, consistent with what is observed in the proxy reconstruction. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water ("white water") whose properties affect the velocity and pressure fields in the vicinity of the free surface and, depending on the breaker characteristics, different mechanisms for air entrainment are usually observed. Several laboratory experiments have been performed to investigate the role of air bubbles in the wave breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. Recent advances in numerical models have given valuable insights into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). A single-phase numerical model, in which the constitutive equations are solved only for the liquid phase, neglects the effects induced by air movement and by air bubbles trapped in the water. Numerical approximations at the free surface may induce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing in air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (for Cornell Breaking waves And Structures 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998).
In the first part of the work, both fluids are considered incompressible, while the second part treats the modelling of air compressibility. The mathematical formulation and the numerical resolution of the governing equations of COBRAS2 are derived, and some model-experiment comparisons are shown. In particular, validation tests are performed in order to prove model stability and accuracy. The simulation of a large air bubble rising in an otherwise quiescent water pool reveals the model's capability to reproduce the physics of the process in a realistic way. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam-break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium in which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analyzed and compared with experimental data and other numerical models, in order to investigate the influence of air on wave breaking mechanisms and to underline the model's capability and accuracy. Finally, the modelling of air compressibility is included in the newly developed model and validated, revealing an accurate reproduction of the processes. Some preliminary tests on wave impact on vertical walls are performed: since air-flow modelling allows a more realistic reproduction of breaking wave propagation, the dependence of impact pressure values on wave breaker shapes and aeration characteristics is studied and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.
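As background to the two-phase formulation, solvers of this family typically weight the fluid properties in each cell by the local air volume fraction; a minimal sketch of that weighting (generic textbook form, not taken from the COBRAS2 equations):

```python
# Volume-fraction weighting of fluid properties in an air-water cell,
# as used generically in two-phase free-surface solvers. Illustrative
# only; not the COBRAS2 formulation itself.
RHO_WATER, RHO_AIR = 1000.0, 1.2    # densities, kg/m^3
MU_WATER, MU_AIR = 1.0e-3, 1.8e-5   # dynamic viscosities, Pa*s

def mixture_properties(alpha_air):
    """Return (density, viscosity) for a cell with air fraction alpha_air."""
    a = alpha_air
    rho = a * RHO_AIR + (1.0 - a) * RHO_WATER
    mu = a * MU_AIR + (1.0 - a) * MU_WATER
    return rho, mu

rho, mu = mixture_properties(0.5)   # half air, half water
```

The three-orders-of-magnitude jump in density across the free surface is why the abstract stresses testing "a wide range of density differences".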

Relevance: 100.00%

Abstract:

In the last decade, demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can seriously jeopardize the economic system of a region as well as of a country. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns arose about the security performance of civil structures after tragic events such as 9/11 or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it is very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In Civil Engineering, this field addresses the techniques needed to find mechanical characteristics, such as stiffness or mass, starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to define, starting from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. Knowledge of these parameters is helpful in the Model Updating procedure, which permits the definition of corrected theoretical models through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data.
Therefore, the new model becomes a very effective control practice when it comes to the rehabilitation of structures or damage assessment. Instrumenting a whole structure is sometimes unfeasible, because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have therefore been trying to address this problem, and in general two main methods are involved. Given the limited number of sensors, in the first case it is possible to gather time histories only at some locations, then move the instruments to other locations and repeat the procedure. Otherwise, if the number of sensors is sufficient and the structure does not have a complicated geometry, it is usually enough to detect only the principal first modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the requested functions. Once again, the objective of this work is to present a general methodology to analyze large structures using a limited number of instruments while, at the same time, obtaining the most information about the identified structure without recalling methodologies of difficult interpretation. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab. Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for a prolific use of substructuring results is developed and implemented.
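To give a flavor of the ERA machinery mentioned above: its first step assembles a block Hankel matrix from the system's impulse-response (Markov) parameters, which the full algorithm then factors via an SVD to obtain a state-space realization. A minimal sketch of only the Hankel assembly, with scalar outputs (the thesis implementation is in Matlab; this is an independent illustration, not its code):

```python
# First step of the Eigensystem Realization Algorithm (ERA): build the
# block Hankel matrix H with H[i][j] = Y_{i+j+1}, the (i+j+1)-th Markov
# parameter. Scalar outputs for brevity; the SVD factorization that
# completes ERA is omitted.
def hankel(markov, rows, cols):
    """Hankel matrix of Markov parameters, entry (i, j) = markov[i+j+1]."""
    return [[markov[i + j + 1] for j in range(cols)] for i in range(rows)]

# Markov parameters of a simple first-order system, y_k = 0.5**k.
markov = [0.5 ** k for k in range(8)]
H = hankel(markov, 3, 3)
```

For this first-order example the Hankel matrix has rank one, and the number of significant singular values ERA finds in H is exactly the model order it realizes.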

Relevance: 100.00%

Abstract:

A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radii distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel whereas the detailed modelling of PSC events is beyond the scope of coarse global scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. 
a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
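As background to the sedimentation scheme discussed above, a generic first-order upwind step for particles falling through a model column can be sketched as follows (illustrative only, with a uniform Courant number per layer; not the actual ECHAM5/MESSy submodel code, whose scheme works on first-order approximations of the vertical mixing-ratio profiles):

```python
# Generic first-order upwind sedimentation through a column
# (index 0 = top layer). Mass leaving each layer enters the one below;
# whatever leaves the bottom layer is deposited at the surface.
def sediment(q, courant):
    """Advect mixing ratios q downward; courant[i] = v*dt/dz in layer i (<= 1)."""
    flux_in = 0.0                      # nothing falls in from above the top
    out = []
    for qi, c in zip(q, courant):
        flux_out = c * qi              # fraction of layer content falling out
        out.append(qi - flux_out + flux_in)
        flux_in = flux_out             # what leaves layer i enters layer i+1
    return out, flux_in                # final flux_in = loss to the surface

q1, surface_flux = sediment([1.0, 0.0, 0.0], [0.5, 0.5, 0.5])
```

A scheme of this family conserves mass by construction but is numerically diffusive, which is why the submodel described above needed a tailor-made variant producing "relatively little numerical diffusion".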

Relevance: 100.00%

Abstract:

We use long instrumental temperature series together with available field reconstructions of sea-level pressure (SLP) and three-dimensional climate model simulations to analyze relations between temperature anomalies and atmospheric circulation patterns over much of Europe and the Mediterranean for the late winter/early spring (January–April, JFMA) season. A Canonical Correlation Analysis (CCA) investigates interannual to interdecadal covariability between a new gridded SLP field reconstruction and seven long instrumental temperature series covering the past 250 years. We then present and discuss prominent atmospheric circulation patterns related to anomalously warm and cold JFMA conditions within different European areas spanning the period 1760–2007. Next, using a data assimilation technique, we link gridded SLP data with a climate model (EC-Bilt-Clio) for a better dynamical understanding of the relationship between large-scale circulation and European climate. We thus present an alternative approach to reconstructing climate for the pre-instrumental period based on the assimilated model simulations. Furthermore, we present an independent method to extend the dynamic circulation analysis for anomalously cold European JFMA conditions back to the sixteenth century. To this end, we use documentary records that are spatially representative of the long instrumental records and derive, through modern analogs, large-scale SLP, surface temperature and precipitation fields. The skill of the analog method is tested in the virtual world of two three-dimensional climate simulations (ECHO-G and HadCM3). This endeavor offers new possibilities both to constrain climate models into a reconstruction mode (through the assimilation approach) and to better assess documentary data in a quantitative way.

Relevance: 100.00%

Abstract:

Research on the physiological adaptation process has found that stress is associated with the rate of cortisol secretion, the main hormone that reflects stress. However, considerable variation among subjects has been reported. Using a sample of older adults (N=46), we tested the hypothesis that cortisol reactivity is composed of (1) a situation-related component representing hypothalamic influence on cortisol secretion observed on three different occasions, and (2) a stable component representing a general trait responsible for cortisol responses observed from occasion to occasion. LISREL VIII was used to test this hypothesis. Results indicated that a homogeneous reliability model was not supported by the data. A congeneric measurement model represented a better fit to the data. Results suggest that subjects have consistent patterns of response during separate experimental occasions. However, results do not suggest a consistent pattern of response over time. The main implication of these results is that salivary cortisol measures are sensitive to experimental stress situations. As such, this noninvasive method may be useful in examining adaptive responses to stress.

Relevance: 100.00%

Abstract:

Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting the prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
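For orientation, the meaning of the A, C and E variance components can be illustrated with Falconer's classic decomposition from twin correlations. The paper itself estimates the components by likelihood under a liability-threshold model, so the sketch below is only a back-of-envelope counterpart, not the paper's method:

```python
# Falconer's decomposition of trait variance from monozygotic (r_mz)
# and dizygotic (r_dz) twin correlations. Back-of-envelope only; the
# abstract's approach fits A, C, E by likelihood with a liability
# threshold and case-control ascertainment corrections.
def ace_from_twin_correlations(r_mz, r_dz):
    a2 = 2.0 * (r_mz - r_dz)   # additive genetic share (heritability)
    c2 = 2.0 * r_dz - r_mz     # shared-environment share
    e2 = 1.0 - r_mz            # unique-environment share (incl. error)
    return a2, c2, e2

a2, c2, e2 = ace_from_twin_correlations(0.6, 0.4)
```

The three shares sum to one by construction, mirroring how the fitted A, C and E variance components partition the total liability variance.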

Relevance: 100.00%

Abstract:

Between 1966 and 2003, the Golden-winged Warbler (Vermivora chrysoptera) experienced declines of 3.4% per year in large parts of its breeding range and has been identified by Partners in Flight as one of 28 land birds requiring expedient action to prevent its continued decline. It is currently being considered for listing under the Endangered Species Act. A major step in advancing our understanding of the status and habitat preferences of Golden-winged Warbler populations in the Upper Midwest was initiated by the publication of new predictive, spatially explicit Golden-winged Warbler habitat models for the northern Midwest. Here, I use original data on observed Golden-winged Warbler abundances in Wisconsin and Minnesota to compare two population models: the hierarchical spatial count (HSC) model and the Habitat Suitability Index (HSI) model. I assessed how well the field data matched the model predictions and found that within Wisconsin the HSC model performed slightly better than the HSI model, whereas the two models performed roughly equally in Minnesota. The error of commission measures sites where a model predicted presence but the Golden-winged Warbler did not occur. For the HSC model, I found a 10% error of commission in Wisconsin and a 24.2% error of commission in Minnesota. Similarly, the HSI model had a 23% error of commission in Minnesota; in Wisconsin, because of the limited areas where the HSI model predicted absences, the data were incomplete and I was unable to determine the error of commission for the HSI model. To compare predicted abundances from the two models, a 3x3 contingency table was used. I found that, when overlapped, the models do not complement one another in identifying Golden-winged Warbler presences. Quantifying the discrepancy between the models, the error of commission shows that the HSI model has only a 6.8% chance of correctly classifying absences in the HSC model, and the HSC model has only a 3.3% chance of correctly classifying absences in the HSI model.
These findings highlight the importance of grasses for nesting, shrubs used for cover and foraging, and trees for song perches and foraging as key habitat characteristics for breeding territory occupancy by singing males.
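The error of commission used in this abstract has a simple form: among sites where a model predicted presence, the fraction where the bird was actually absent. A sketch with made-up counts (not the Wisconsin or Minnesota data):

```python
# Error of commission: false presences as a fraction of all predicted
# presences. The counts below are invented for illustration, not the
# study's survey data.
def commission_error(predicted_present_but_absent, predicted_present_total):
    """Fraction of predicted-presence sites where the species was absent."""
    return predicted_present_but_absent / predicted_present_total

err = commission_error(24, 240)   # 24 false presences out of 240 predictions
```

The complementary error of omission (sites where the model predicted absence but the bird occurred) would be computed the same way over predicted-absence sites.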

Relevance: 100.00%

Abstract:

Ethanol-gasoline fuel blends are increasingly being used in spark ignition (SI) engines due to continued growth in renewable fuels as part of a growing renewable portfolio standard (RPS). This leads to the need for a simple and accurate ethanol-gasoline blend combustion model that is applicable to one-dimensional engine simulation. A parametric combustion model has been developed, integrated into an engine simulation tool, and validated using SI engine experimental data. The parametric combustion model was built inside a user compound in GT-Power. In this model, selected burn durations were computed using correlations as functions of physically based non-dimensional groups that were developed using the experimental engine database over a wide range of ethanol-gasoline blends, engine geometries, and operating conditions. A correlation for the coefficient of variation (COV) of gross indicated mean effective pressure (IMEP) was also added to the parametric combustion model. This correlation enables the modeling of cycle-to-cycle combustion variation as a function of engine geometry and operating conditions. The computed burn durations were then used to fit single and double Wiebe functions. The single-Wiebe parametric combustion compound used the least squares method to compute the single-Wiebe parameters, while the double-Wiebe compound used an analytical solution to compute the double-Wiebe parameters. These compounds were then integrated into the engine model in GT-Power through the multi-Wiebe combustion template, in which the values of the Wiebe parameters (single-Wiebe or double-Wiebe) were sensed via RLT-dependence. The parametric combustion models were validated by overlaying the simulated pressure traces from GT-Power onto experimentally measured pressure traces. A thermodynamic engine model was also developed to study the effect of fuel blends, engine geometries and operating conditions on both the burn durations and the simulated COV of gross IMEP.
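The single Wiebe function fitted by the parametric compound has the standard closed form x_b = 1 - exp(-a ((theta - theta0)/dtheta)^(m+1)); a minimal sketch with generic placeholder parameters (the efficiency factor a and shape factor m below are common textbook defaults, not the model's fitted correlations):

```python
import math

# Single Wiebe function: mass fraction burned versus crank angle.
# a = 6.908 gives ~99.9% burned at the end of the burn duration;
# both a and m here are generic placeholders, not fitted values.
def wiebe(theta, theta0, duration, a=6.908, m=2.0):
    """Mass fraction burned at crank angle theta (degrees)."""
    if theta <= theta0:
        return 0.0
    x = (theta - theta0) / duration
    return 1.0 - math.exp(-a * x ** (m + 1.0))

mfb = wiebe(20.0, theta0=-5.0, duration=50.0)  # halfway through the burn
```

A double-Wiebe model, as used for the second compound, superposes two such curves with a weighting factor to capture two-stage heat release.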

Relevance: 100.00%

Abstract:

There is practically only one method of gas analysis. It was worked out many years ago by Bunsen, Hempel, and Winkler, and consists of the successive absorption, by different chemicals, of the various constituents of the gas. The only improvement to this method is the oxidation and combustion of different components of a mixture, followed by absorption.