946 results for software failure prediction


Relevance:

20.00%

Publisher:

Abstract:

The present study examines the shrinkage behaviour of residually derived black cotton (BC) soil and red soil compacted specimens subjected to air-drying from the swollen state. The soil specimens were compacted at varying dry densities and moisture contents to simulate varied field conditions. The void ratio and moisture content of the swollen specimens were monitored during the drying process, and the relationship between them is analyzed. Shrinkage is represented as the reduction in void ratio with decreasing water content of the soil specimens, and is found to occur in three distinct stages. The total shrinkage magnitude depends on the type of clay mineral present. Variations in compaction conditions affect the total shrinkage magnitude of BC soil specimens only marginally but have a relatively greater effect on red soil specimens. A linear relation is obtained between the total shrinkage magnitude and the volumetric water content of soil specimens in the swollen state, which can be used to predict the shrinkage magnitude of soils.
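
The closing claim lends itself to a brief illustration. The following is a minimal sketch of fitting and applying such a linear relation, assuming hypothetical measurement arrays; the values and fitted coefficients are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical measurements: volumetric water content (%) of specimens in
# the swollen state, and the observed total shrinkage magnitude (reduction
# in void ratio) for each specimen. Values are illustrative only.
theta_w = np.array([38.0, 42.5, 47.0, 51.5, 56.0])    # volumetric water content, %
shrinkage = np.array([0.21, 0.27, 0.34, 0.40, 0.46])  # total shrinkage (void-ratio drop)

# Least-squares fit of the linear relation: shrinkage = a * theta_w + b
a, b = np.polyfit(theta_w, shrinkage, 1)

# Predict the shrinkage magnitude of a new specimen from its
# swollen-state volumetric water content.
theta_new = 49.0
predicted = a * theta_new + b
print(f"slope={a:.4f}, intercept={b:.4f}, predicted shrinkage={predicted:.3f}")
```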

Relevance:

20.00%

Publisher:

Abstract:

Free software is viewed as a revolutionary and subversive practice, and in particular has dealt a strong blow to the traditional conception of intellectual property law (although in its current form it could be considered a 'hack' of IP rights). However, other (capitalist) areas of law have been swift to embrace free software, or at least to incorporate it into their own tenets. One area in particular is that of competition (antitrust) law, which has itself long been in theoretical conflict with intellectual property, due to the restriction on competition inherent in the grant of ‘monopoly’ rights by copyrights, patents and trademarks. This contribution examines how competition law has approached free software by examining instances in which courts have had to deal with such initiatives, for instance in the Oracle/Sun Microsystems merger, and the implications that these decisions have for free software initiatives. The presence or absence of corporate involvement in initiatives will be an important factor in this investigation, with it being posited that true instances of ‘commons-based peer production’ can still subvert the capitalist system, including perplexing its laws beyond intellectual property.

Relevance:

20.00%

Publisher:

Abstract:

In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates the prediction-error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. In the quantization phase, we use a modified SPIHT algorithm to achieve efficiency in memory requirements; the memory constraint plays a vital role in wireless and bandwidth-limited applications. A single reusable list is used instead of the three continuously growing linked lists of the original SPIHT. The method is also error resilient. Performance is measured in terms of PSNR and memory requirements, and the algorithm shows good compression performance and significant savings in memory. (C) 2006 Elsevier B.V. All rights reserved.
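
As a rough illustration of the four-neighbour prediction step described above (a simplified stand-in, not the paper's hierarchical tree scheme), the sketch below predicts each interior pixel as the mean of its four neighbours and forms the prediction-error image that a wavelet/sub-band coder would then compress.

```python
import numpy as np

def prediction_error_image(img: np.ndarray) -> np.ndarray:
    """Predict each interior pixel as the average of its four
    axis-aligned neighbours and return the prediction-error image.
    A simplified stand-in for the paper's hierarchical predictor."""
    img = img.astype(np.float64)
    pred = img.copy()
    # Four-neighbour average for interior pixels.
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    # The residual is what the sub-band coder would compress.
    return img - pred

# Example on a small random "image".
rng = np.random.default_rng(0)
error = prediction_error_image(rng.integers(0, 256, size=(8, 8)))
print(error.round(1))
```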

Relevance:

20.00%

Publisher:

Abstract:

The determination of the overconsolidation ratio (OCR) of clay deposits is an important task in geotechnical engineering practice. This paper examines the potential of a support vector machine (SVM) for predicting the OCR of clays from piezocone penetration test data. The SVM is a statistical learning method based on the structural risk minimization principle, which minimizes both error and weight terms. The five input variables used in the SVM model for prediction of OCR are the corrected cone resistance (qt), vertical total stress (sigmav), hydrostatic pore pressure (u0), pore pressure at the cone tip (u1), and pore pressure just above the cone base (u2). A sensitivity analysis has been performed to investigate the relative importance of each of the input parameters; it shows that qt is the in situ measurement most strongly influenced by OCR, followed by sigmav, u0, u2, and u1. A comparison between the SVM and some of the traditional interpretation methods is also presented. The results of this study show that the SVM approach has the potential to be a practical tool for the determination of OCR.
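
A minimal sketch of this type of model, using scikit-learn's SVR on the five piezocone inputs listed above; the training records, kernel choice, and hyperparameters are placeholders, not those of the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical piezocone records: [qt, sigmav, u0, u1, u2] in kPa, with the
# corresponding OCR determined independently. Values are illustrative only.
X = np.array([
    [1200.0, 150.0,  80.0, 300.0, 250.0],
    [2500.0, 300.0, 160.0, 700.0, 540.0],
    [ 900.0, 100.0,  60.0, 220.0, 180.0],
    [3100.0, 420.0, 230.0, 880.0, 700.0],
])
y = np.array([1.5, 3.2, 1.2, 4.0])  # OCR

# Standardize the inputs, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)

# Predict the OCR for a new sounding.
print(model.predict([[1800.0, 220.0, 120.0, 500.0, 400.0]]))
```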

Relevance:

20.00%

Publisher:

Abstract:

We explore the use of information on the co-occurrence of domains in multi-domain proteins for predicting protein-protein interactions. The basic premise of our work is the assumption that domains co-occurring in a polypeptide chain undergo either structural or functional interactions among themselves. In this study we use a template dataset of domains in multi-domain proteins and predict protein-protein interactions in a target organism. We note that the maximum number of correct predictions of interacting protein domain families in S. cerevisiae (158) is obtained when the dataset of closely related organisms is used as the template, followed by the more diverse dataset of bacterial proteins (48) and a dataset of randomly chosen proteins (23). We conclude that the use of multi-domain information from organisms closely related to the target can aid the prediction of interacting protein families.
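
The premise can be sketched as follows: collect domain pairs that co-occur within template multi-domain proteins, then predict that two target proteins interact if they carry such a pair between them. This is a toy schematic with made-up domain annotations, not the study's actual pipeline.

```python
from itertools import combinations

# Template multi-domain proteins: protein -> set of domain families.
template = {
    "tp1": {"SH3", "SH2", "Kinase"},
    "tp2": {"PDZ", "SH3"},
}

# Domain pairs co-occurring in one polypeptide chain are assumed to
# interact structurally or functionally.
cooccurring = set()
for domains in template.values():
    for a, b in combinations(sorted(domains), 2):
        cooccurring.add((a, b))

# Target-organism proteins and their domain annotations (hypothetical).
target = {"pA": {"SH2"}, "pB": {"Kinase"}, "pC": {"PDZ"}}

# Predict an interaction when two target proteins carry a template pair.
for (p1, d1), (p2, d2) in combinations(target.items(), 2):
    pairs = {tuple(sorted((a, b))) for a in d1 for b in d2}
    if pairs & cooccurring:
        print(f"predicted interaction: {p1} - {p2}")
```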

Relevance:

20.00%

Publisher:

Abstract:

Objectives: Glutathionyl haemoglobin (GS-Hb), belonging to the class of glutathionylated proteins, has been investigated as a possible marker of oxidative stress in different chronic diseases. The purpose of this study was to examine whether glutathionyl haemoglobin can serve as an oxidative stress marker in non-diabetic chronic renal failure patients on different renal replacement therapies (RRT), through its quantitation and the characterization of the specific binding site of glutathione in the haemoglobin molecule by mass spectrometric analysis. Design and methods: The study group consisted of non-diabetic chronic renal failure patients on renal replacement therapy (RRT): hemodialysis (HD), continuous ambulatory peritoneal dialysis (CAPD) and renal allograft transplant (Txp) patients. Haemoglobin samples from these subjects were analyzed by liquid chromatography electrospray ionization mass spectrometry for GS-Hb quantitation. Characterization of GS-Hb was done by tandem mass spectrometry. Levels of erythrocyte glutathione (GSH) and lipid peroxidation (as thiobarbituric acid reactive substances) were measured spectrophotometrically, while glycated haemoglobin (HbA1c) was measured by HPLC. Results: GS-Hb levels were markedly elevated in the dialysis group and marginally elevated in the transplant group as compared to the controls. GS-Hb levels correlated positively with lipid peroxidation and negatively with erythrocyte glutathione levels in the RRT groups, indicating enhanced oxidative stress. De novo sequencing of the chymotryptic fragment of GS-Hb established that glutathione is attached to Cys-93 of the beta globin chain. Mass spectrometric quantitation of total glycated haemoglobin showed good agreement with HbA1c estimation by the conventional HPLC method. Conclusions: Glutathionyl haemoglobin can serve as a clinical marker of oxidative stress in chronic debilitating therapies such as RRT. Mass spectrometry provides a reliable analytical tool for the quantitation and residue-level characterization of different post-translational modifications of haemoglobin. (c) 2007 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models: the NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow the data assimilation of Doppler radar radial wind observations. The work has been carried out within the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In the radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled within observation errors, which consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling, and the impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of the observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system. Calculating the bias for radial wind observations in the conventional way can result in zero even when there are systematic differences in wind speed and/or direction; a bias estimation method designed for this observation type is therefore introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, also enables the exploitation of radial wind observations for NWP model validation. One-month model experiments performed with HIRLAM model versions differing only in a detail of the surface stress parameterization indicate that the use of radar wind observations in NWP model validation is very beneficial.
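
The core of the observation modelling described above is the projection of the model wind vector onto the radar pulse path. Below is a minimal sketch of that projection, ignoring the pulse-volume broadening and beam bending that the thesis accounts for; the azimuth and elevation conventions are assumptions.

```python
import math

def radial_wind(u: float, v: float, w: float,
                azimuth_deg: float, elevation_deg: float) -> float:
    """Project a model wind vector (u east, v north, w up, in m/s) onto the
    radar pulse path, given beam azimuth (clockwise from north) and
    elevation above the horizon. Simplified: no beam broadening/bending."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (u * math.sin(az) * math.cos(el) +
            v * math.cos(az) * math.cos(el) +
            w * math.sin(el))

# A 10 m/s westerly wind seen by a beam pointing due east at 1 deg elevation:
print(radial_wind(10.0, 0.0, 0.0, azimuth_deg=90.0, elevation_deg=1.0))
```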

Relevance:

20.00%

Publisher:

Abstract:

Data assimilation provides an initial atmospheric state, called the analysis, for Numerical Weather Prediction (NWP). This analysis consists of pressure, temperature, wind, and humidity on a three-dimensional NWP model grid. Data assimilation blends meteorological observations with the NWP model in a statistically optimal way. The objective of this thesis is to describe methodological development carried out in order to allow data assimilation of ground-based measurements of the Global Positioning System (GPS) into the High Resolution Limited Area Model (HIRLAM) NWP system. Geodetic processing produces observations of tropospheric delay. These observations can be processed either for vertical columns at each GPS receiver station or for the individual propagation paths of the microwave signals; these alternative processing methods result in Zenith Total Delay (ZTD) and Slant Delay (SD) observations, respectively. ZTD and SD observations are of use in the analysis of atmospheric humidity. A method is introduced for the estimation of the horizontal error covariance of ZTD observations. The method makes use of observation minus model background (OmB) sequences of ZTD and conventional observations. It is demonstrated that the ZTD observation error covariance is relatively large at station separations shorter than 200 km, but non-zero covariances also appear at considerably larger station separations. The relatively low density of radiosonde observing stations limits the ability of the proposed estimation method to resolve the shortest length-scales of the error covariance. SD observations are shown to contain a statistically significant signal on the asymmetry of the atmospheric humidity field. However, the asymmetric component of SD is found to be nearly always smaller than the standard deviation of the SD observation error. SD observation modelling is described in detail, and other issues relating to SD data assimilation are also discussed, including the determination of error statistics, the tuning of observation quality control, and allowing local observation error correlations to be taken into account. The experiments show that the data assimilation system is able to retrieve the asymmetric information content of hypothetical SD observations at a single receiver station. Moreover, the impact of real SD observations on the humidity analysis is comparable to that of other observing systems.
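
For orientation, the zenith total delay is essentially the vertical integral of atmospheric refractivity, ZTD = 10^-6 * integral of N(z) dz. A minimal sketch of that formula, assuming a toy exponential refractivity profile rather than geodetic processing of real GPS signals:

```python
import numpy as np

def zenith_total_delay(z: np.ndarray, N: np.ndarray) -> float:
    """Zenith total delay (metres) from a refractivity profile N(z)
    (dimensionless N-units) on a height grid z (metres):
    ZTD = 1e-6 * integral of N dz."""
    return 1e-6 * np.trapz(N, z)

# Toy exponential profile: 320 N-units at the surface, 8 km scale height.
z = np.linspace(0.0, 40_000.0, 2001)   # height grid, m
N = 320.0 * np.exp(-z / 8000.0)        # refractivity profile
print(f"ZTD ~ {zenith_total_delay(z, N):.2f} m")  # on the order of 2.5 m
```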

Relevance:

20.00%

Publisher:

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km; recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size; however, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both longwave and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested: in the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.

Relevance:

20.00%

Publisher:

Abstract:

In this article, a new flame extinction model based on the k/epsilon turbulence time scale concept is proposed to predict flame liftoff heights over a wide range of coflow temperatures and O2 mass fractions. The flame is assumed to be quenched when the fluid time scale is less than the chemical time scale (Da < 1). The chemical time scale is derived as a function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type. The present extinction model has been tested for a variety of conditions: (a) ambient coflow conditions (1 atm and 300 K) for propane, methane and hydrogen jet flames, (b) highly preheated coflow, and (c) high-temperature, low-oxidizer-concentration coflow. Predicted flame liftoff heights of jet diffusion and partially premixed flames are in excellent agreement with the experimental data for all the simulated conditions and fuels. It is observed that flame stabilization occurs at a point near the stoichiometric mixture fraction surface, where the local flow velocity is equal to the local flame propagation speed. The present method is used to determine the chemical time scale for the conditions existing in the mild/flameless combustion burners investigated by the authors earlier. The model successfully predicts the initial premixing of the fuel with combustion products before the combustion reaction initiates. It is inferred from these numerical simulations that fuel injection is followed by intense premixing with hot combustion products in the primary zone, and that the combustion reaction follows further downstream. Reaction rate contours suggest that the reaction takes place over a large volume and that its magnitude is lower than in the conventional combustion mode. The appearance of attached flames in the mild combustion burners at low thermal inputs is also predicted, and is attributed to the lower average jet velocity and larger residence times in the near-injection zone.
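
The extinction criterion described above reduces to a Damköhler-number test: with the fluid time scale taken as k/epsilon, the flame is assumed quenched where Da < 1. A minimal sketch of that test follows; in the paper the chemical time scale is a derived function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type, whereas here it is simply passed in.

```python
def is_quenched(k: float, epsilon: float, tau_chem: float) -> bool:
    """Flame-extinction test: quenched when the fluid (turbulence) time
    scale tau_f = k/epsilon is shorter than the chemical time scale,
    i.e. Damkohler number Da = tau_f / tau_chem < 1."""
    tau_fluid = k / epsilon   # seconds: k in m^2/s^2, epsilon in m^2/s^3
    return tau_fluid / tau_chem < 1.0

# Example: strong turbulence (small k/epsilon) against a 5 ms chemistry.
print(is_quenched(k=0.5, epsilon=200.0, tau_chem=0.005))   # True: quenched
print(is_quenched(k=2.0, epsilon=100.0, tau_chem=0.005))   # False: burning
```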

Relevance:

20.00%

Publisher:

Abstract:

A fatigue crack propagation model for concrete is proposed based on the concepts of fracture mechanics. The model takes into account the loading history, the frequency of the applied load, and size-effect parameters. Using this model, a method based on linear elastic fracture mechanics is described for assessing the residual strength of cracked plain and reinforced concrete (RC) beams. It can be used to predict the residual strength (load-carrying capacity) of cracked or damaged plain and reinforced concrete beams at a given level of damage. It has been seen that the fatigue crack propagation rate increases as the size of the plain concrete beam increases, indicating an increase in brittleness. In reinforced concrete (RC) beams, the fracture process becomes stable only when the beam is sufficiently reinforced.
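
As a rough illustration of fatigue-crack-growth bookkeeping, the sketch below evaluates a generic Paris-type law in closed form for a hypothetical edge crack; this is not the authors' model, which additionally carries loading-history, frequency, and size-effect terms, and all parameter values are placeholders.

```python
import math

def paris_cycles(a0: float, a_crit: float, delta_sigma: float,
                 C: float = 1e-10, m: float = 3.0, Y: float = 1.12) -> float:
    """Closed-form cycle count for a generic Paris law
    da/dN = C * (dK)^m with dK = Y * delta_sigma * sqrt(pi * a).
    Valid for m != 2. Units: a in m, delta_sigma in MPa. Illustrative only."""
    A = C * (Y * delta_sigma * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    # Integrate da / (C * dK^m) analytically from a0 to a_crit.
    return (a_crit ** p - a0 ** p) / (A * p)

# Hypothetical beam: 2 mm initial crack, 20 mm critical depth, 8 MPa stress range.
print(f"{paris_cycles(a0=0.002, a_crit=0.020, delta_sigma=8.0):,.0f} cycles")
```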