Abstract:
Global hydrological models (GHMs) simulate the land-surface hydrological dynamics of continental-scale river basins. Here we describe one such GHM, the Macro-scale Probability-Distributed Moisture model.09 (Mac-PDM.09). The model has undergone a number of revisions since it was last applied in the hydrological literature, and this paper provides a detailed description of the latest version. The main revisions are the following: (1) the model can be run for n repetitions, which provides more robust estimates of extreme hydrological behaviour; (2) the model can use a gridded field of the coefficient of variation (CV) of daily rainfall for the stochastic disaggregation of monthly precipitation to daily precipitation; and (3) the model can now be forced with daily as well as monthly input climate data. We demonstrate the effect that each of these three revisions has on simulated runoff relative to the model before the revisions were applied. Importantly, we show that when Mac-PDM.09 is forced with monthly input data, a negative runoff bias results relative to daily forcings in regions of the globe where the day-to-day variability in relative humidity is high. The runoff bias can be as large as −80% for a small selection of catchments, although the absolute magnitude of the bias may be small. As such, we recommend that future applications of Mac-PDM.09 using monthly climate forcings acknowledge this bias as a limitation of the model. The performance of Mac-PDM.09 is evaluated by validating simulated runoff against observed runoff for 50 catchments. We also present a sensitivity analysis demonstrating that simulated runoff is considerably more sensitive to the method of potential evaporation (PE) calculation than to perturbations in the soil moisture and field capacity parameters.
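The abstract does not spell out the disaggregation scheme of revision (2); as a rough illustration of the idea, the sketch below splits a monthly total into daily values using a gamma distribution whose shape parameter is set by the gridded CV (the function name and the distributional choice are assumptions, not taken from Mac-PDM.09):

```python
import numpy as np

def disaggregate_monthly_to_daily(monthly_total, cv_daily, n_days, rng=None):
    """Stochastically split a monthly precipitation total into daily values.

    Assumes daily rainfall follows a gamma distribution whose shape
    parameter k is derived from the coefficient of variation (CV = 1/sqrt(k));
    the draws are rescaled so they sum exactly to the monthly total.
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = 1.0 / cv_daily ** 2          # gamma shape from the gridded CV
    draws = rng.gamma(shape, 1.0, n_days)
    return monthly_total * draws / draws.sum()

# Example: 120 mm over a 30-day month with a daily-rainfall CV of 1.5
daily = disaggregate_monthly_to_daily(120.0, 1.5, 30)
```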
Abstract:
Firms form consortia in order to win contracts. Once a project has been awarded to a consortium, each member then concentrates on its own contract with the client. Consortia are therefore marketing devices, which present the impression of teamworking, while the production process remains just as fragmented as under conventional procurement methods. In this way, the consortium forms a barrier between the client and the actual construction production process. Firms form consortia not as a simple development of normal ways of working, but because the circumstances of specific projects make a consortium a necessary vehicle. These circumstances include projects that are too large or too complex to undertake alone, or projects that require ongoing services which the individual firms cannot provide in-house. It is not a preferred way of working, because participants carry extra risk in the form of liability for the actions of their partners in the consortium. The behaviour of members of consortia is determined by their relative power, based on several factors including financial commitment and ease of replacement. The existence of a consortium reduces the visibility of the supply chain to the public-sector client and to the industry, because the consortium forms an additional obstacle between the client and the firms undertaking the actual construction work. Supply chain visibility matters to the client, who otherwise loses control over the process of construction or service provision while remaining accountable for cost overruns. To overcome this separation there is a convincing argument in favour of adopting the approach put forward in the Project Partnering Contract 2000 (PPC2000) Agreement. Members of consortia do not necessarily go on to work in the same consortia again, because members need to respond flexibly to opportunities as and when they arise. Decision-making processes within consortia tend to operate on an ad hoc basis. Construction risk is taken by the contractor and the construction supply chain, but reputational risk is carried by all the firms associated with a consortium. There is wide variation in the manner in which consortia are formed, determined by the individual circumstances of each project: its requirements, size and complexity, and the attitude of individual project leaders. However, there are a number of close working relationships based on generic models of consortium-like arrangements for the purpose of building production, such as the Housing Corporation Guidance Notes and the PPC2000.
Abstract:
Many evolutionary algorithm applications involve fitness functions with high time complexity, large dimensionality (hence very many fitness evaluations will typically be needed), or both. In such circumstances there is a dire need to tune the various features of the algorithm well, so that performance and time savings are optimized. However, these are precisely the circumstances in which prior tuning is very costly in time and resources, so methods that enable fast prior tuning are needed. We describe a candidate technique for this purpose, in which we model a landscape as a finite state machine inferred from preliminary sampling runs. In prior algorithm-tuning trials we can then replace the 'real' landscape with the model, enabling extremely fast tuning and saving far more time than was required to infer the model. Preliminary results indicate much promise, though much work remains to establish the conditions under which the technique can be most beneficially used. A main limitation of the method as described here is its restriction to mutation-only algorithms, but there are various ways to address this and other limitations.
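The abstract leaves the machine's construction unspecified; a minimal sketch of the general idea, assuming states are previously sampled search points and transitions are the successor frequencies observed under mutation in the preliminary runs (all names are illustrative):

```python
import random
from collections import defaultdict

class LandscapeFSM:
    """Surrogate landscape: a finite state machine whose states are
    sampled search points and whose transitions are the successors
    observed under mutation in preliminary sampling runs."""

    def __init__(self):
        self.successors = defaultdict(list)  # state -> observed successor states
        self.fitness = {}                    # state -> cached fitness value

    def record(self, state, fit, successor):
        """Log one (state, fitness, mutated successor) observation."""
        self.fitness[state] = fit
        self.successors[state].append(successor)

    def step(self, state):
        """Cheap stand-in for an expensive mutate-and-evaluate step:
        draw from the empirical successor distribution."""
        nexts = self.successors[state]
        return random.choice(nexts) if nexts else state
```

Tuning trials would then run a mutation-only algorithm against `step` and the cached `fitness` values instead of the costly real evaluation.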
Abstract:
To test the effectiveness of stochastic single-chain models in describing the dynamics of entangled polymers, we systematically compare one such model, the slip-spring model, to a multichain model solved using stochastic molecular dynamics (MD) simulations (the Kremer-Grest model). The comparison investigates whether the single-chain model can adequately describe both a microscopic dynamical quantity and a macroscopic rheological quantity for a range of chain lengths. Choosing a particular chain length in the slip-spring model, the parameter values that best reproduce the mean-square displacement of a group of monomers are determined by fitting to MD data. Using the same set of parameters, we then test whether the predictions of the mean-square displacements for other chain lengths agree with the MD calculations. We follow this with a comparison of the time-dependent stress relaxation moduli obtained from the two models for a range of chain lengths. After identifying a limitation of the original slip-spring model in describing the static structure of the polymer chain as seen in MD, we remedy it by introducing a pairwise repulsive potential between the monomers in the chains. Poor agreement of the mean-square monomer displacements at short times is rectified by the use of generalized Langevin equations for the dynamics, which results in significantly improved agreement.
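For reference, the microscopic quantity being fitted is the monomer mean-square displacement; a minimal sketch of how it is typically computed from an unwrapped trajectory (the array layout is an assumption, and time-origin averaging is omitted for brevity):

```python
import numpy as np

def monomer_msd(traj):
    """Mean-square displacement g1(t), averaged over monomers.

    traj: unwrapped coordinates with shape (n_frames, n_monomers, 3).
    Returns one MSD value per frame, measured from the t = 0 frame.
    """
    disp = traj - traj[0]                        # displacement from t = 0
    return (disp ** 2).sum(axis=-1).mean(axis=-1)
```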
Abstract:
K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as 'brute force', since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees): the combination of an efficient data structure and geometric constraints makes it possible to reduce the number of distance computations required at each iteration. In this work we present a general space partitioning approach for improving the efficiency and scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees, in contrast to KD-Trees. In the experimental analysis, we have tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used, carrying out extensive experimental tests to compare the proposed approach to the one based on KD-Trees (KD-KM) across a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the 'brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
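The abstract does not detail the tree construction; as a rough sketch of the idea, the function below builds a binary space partitioning tree by recursive divisive 2-means splits, in contrast to the axis-aligned splits of a KD-Tree (the parameter names and stopping rule are assumptions):

```python
import numpy as np

def bsp_tree(points, leaf_size=32, rng=None):
    """Build a binary space partitioning tree by divisive 2-means splits.

    Each internal node stores the two cell centres; leaves store points.
    """
    rng = np.random.default_rng() if rng is None else rng
    if len(points) <= leaf_size:
        return ("leaf", points)
    centres = points[rng.choice(len(points), size=2, replace=False)]
    for _ in range(5):  # a few Lloyd iterations suffice for a split
        dists = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        if (labels == 0).all() or (labels == 1).all():
            return ("leaf", points)  # degenerate split: stop here
        centres = np.stack([points[labels == k].mean(axis=0) for k in (0, 1)])
    return ("node", centres,
            bsp_tree(points[labels == 0], leaf_size, rng),
            bsp_tree(points[labels == 1], leaf_size, rng))
```

At each K-Means iteration, the tree can then be traversed to prune centroids that cannot be closest to any point in a cell, which is where the distance-computation savings come from.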
Abstract:
The main limitation of linearization theory that prevents its application in practical problems is the need for exact knowledge of the plant. This requirement is eliminated, and it is shown that a multilayer network can synthesise the state feedback coefficients that linearize a nonlinear control-affine plant. The stability of the linearizing closed loop can be guaranteed if the autonomous plant is asymptotically stable and the state feedback is bounded.
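For context, the classical law that the network's output replaces is the textbook feedback-linearizing control; in the scalar single-input case it reads (standard notation, not taken from the paper):

```latex
\dot{x} = f(x) + g(x)\,u, \qquad
u = \frac{v - f(x)}{g(x)} \quad\Longrightarrow\quad \dot{x} = v
```

Exact cancellation thus requires knowing $f$ and $g$; the contribution described here is to have a multilayer network learn the feedback coefficients instead.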
Abstract:
Global climate and weather models tend to produce rainfall that is too light and too regular over the tropical ocean. This is likely because of convective parametrizations, but the problem is not well understood. Here, distributions of precipitation rates are analyzed for high-resolution UK Met Office Unified Model simulations of a 10-day case study over a large tropical domain (∼20°S–20°N and 42°E–180°E). Simulations with 12 km grid length and parametrized convection have too many occurrences of light rain and too few of heavier rain when interpolated onto a 1° grid and compared with Tropical Rainfall Measuring Mission (TRMM) data. In fact, this version of the model appears to have a preferred scale of rainfall around 0.4 mm h⁻¹ (10 mm day⁻¹), unlike observations of tropical rainfall. On the other hand, 4 km grid length simulations with explicit convection produce distributions much more similar to TRMM observations. The apparent preferred scale at lighter rain rates seems to be a feature of the convective parametrization rather than of the coarse resolution, as demonstrated by results from 12 km simulations with explicit convection and 40 km simulations with parametrized convection. In fact, coarser-resolution models with explicit convection tend to have even more heavy rain than observed. Implications for models using convective parametrizations, including interactions of heating and moistening profiles with larger scales, are discussed. One important implication is that the explicit-convection 4 km model has temperature and moisture tendencies that favour transitions in the convective regime. Also, the 12 km parametrized-convection model produces a more stable temperature profile at its extreme high-precipitation range, which may reduce the chance of very heavy rainfall. Further study is needed to determine whether unrealistic precipitation distributions are due to some fundamental limitation of convective parametrizations or whether parametrizations can be improved, in order to better simulate these distributions.
Abstract:
Peat soils consist of poorly decomposed plant detritus, preserved by low decay rates, and deep peat deposits are globally significant stores in the carbon cycle. High water tables and low soil temperatures are commonly held to be the primary reasons for low peat decay rates. However, recent studies suggest a thermodynamic limit to peat decay, whereby the slow turnover of peat soil pore water may lead to high concentrations of phenols and dissolved inorganic carbon. In sufficient concentrations, these chemicals may slow or even halt microbial respiration, providing a negative feedback to peat decay. We document the analysis of a simple, one-dimensional theoretical model of peatland pore water residence time distributions (RTDs). The model suggests that broader, thicker peatlands may be more resilient to rapid decay caused by climate change because of slow pore water turnover in deep layers. Even shallow peat deposits may be resilient to rapid decay if rainfall rates are low. However, the model suggests that even thick peatlands may be vulnerable to rapid decay under prolonged high rainfall rates, which may act to flush pore water with fresh rainwater. We also used the model to illustrate a particular limitation of the diplotelmic (i.e., acrotelm and catotelm) model of peatland structure: model peatlands of contrasting hydraulic structure exhibited identical water tables but contrasting RTDs. These scenarios would be treated identically by diplotelmic models, although the thermodynamic limit suggests contrasting decay regimes. We therefore conclude that the diplotelmic model should be discarded in favor of model schemes that consider continuous variation in peat properties and processes.
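The paper's model is not specified in the abstract; as a toy illustration of why deep layers turn over slowly, the sketch below computes per-layer residence times for a one-dimensional column in which the water flux carried through each layer falls off with depth (all profile shapes and numbers are assumptions):

```python
import numpy as np

def layer_residence_times(thickness, porosity, flux):
    """Residence time of pore water in each layer of a 1-D peat column.

    thickness (m) and porosity (-) are per-layer arrays; flux (m/yr) is
    the water flux passing through each layer. Residence time is the
    water stored in a layer divided by the flux through it.
    """
    storage = thickness * porosity
    return storage / flux

# Toy profile: ten 0.5 m layers; flux decays sharply with depth because
# hydraulic conductivity is far higher near the peat surface.
z = np.arange(10)
tau = layer_residence_times(
    thickness=np.full(10, 0.5),
    porosity=np.full(10, 0.9),
    flux=0.5 * np.exp(-z),       # m/yr, illustrative decay with depth
)
# tau grows roughly e-fold per layer: deep pore water turns over very slowly.
```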
Abstract:
Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described, in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures, and groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics which relate to the event geometry; example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E and 35°N to 70°N) for the period 1901–2006. Large-scale structure is found to be abundant, with 75 events identified that last for more than 3 months and span at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large-area, sustained droughts result from the combination of multiple small-area (∼10⁶ km²), short-duration (∼3 months) events. The small events are not found to occur independently in space, which leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
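The abstract does not state the exact connectivity rule; a minimal sketch of the 3-D structure extraction, assuming voxels below the threshold are grouped by full 26-connectivity in the (time, latitude, longitude) grid (function names are illustrative):

```python
import numpy as np
from scipy import ndimage

def drought_events(spi, threshold=-1.0):
    """Identify coherent 3-D drought structures in a standardised
    precipitation field spi of shape (time, lat, lon).

    Voxels below the threshold that touch in space or time belong to
    the same event (26-connectivity in the 3-D grid)."""
    dry = spi < threshold
    labels, n_events = ndimage.label(dry, structure=np.ones((3, 3, 3)))
    return labels, n_events

def event_centroid(labels, event_id):
    """(time, lat, lon) index-space centroid of one labelled event."""
    return ndimage.center_of_mass(labels == event_id)
```

Summary statistics such as event volume (voxel count) and aspect ratio then follow directly from the labelled array.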
Abstract:
A perceived limitation of z-coordinate models, associated with spurious diapycnal mixing in eddying, frontal flow, can be readily addressed through appropriate attention to the tracer advection schemes employed. It is demonstrated that tracer advection schemes developed by Prather and collaborators for application in the stratosphere greatly improve the fidelity of eddying flows, reducing levels of spurious diapycnal mixing to below those directly measured in field experiments, ∼1 × 10⁻⁵ m² s⁻¹. This approach yields a model in which geostrophic eddies are quasi-adiabatic in the ocean interior, so that the residual-mean overturning circulation aligns almost perfectly with density contours. A reentrant channel configuration of the MIT General Circulation Model, which approximates the Antarctic Circumpolar Current, is used to examine these issues. Virtual analogs of deliberate ocean tracer-release field experiments reinforce our conclusion, producing passive tracer solutions that parallel the field experiments remarkably well.
Abstract:
Detailed understanding of the haemodynamic changes that underlie non-invasive neuroimaging techniques, such as blood oxygen level dependent functional magnetic resonance imaging, is essential if we are to continue to extend the use of these methods for understanding brain function and dysfunction. The use of animal, and in particular rodent, research models has been central to these endeavours, as they allow in vivo experimental techniques that provide measurements of the haemodynamic response function at high temporal and spatial resolution. A limitation of most of this research is the use of anaesthetic agents, which may disrupt or mask important features of neurovascular coupling or the haemodynamic response function. In this study we therefore measured spatiotemporal cortical haemodynamic responses to somatosensory stimulation in awake rats using optical imaging spectroscopy. Trained, restrained animals received non-noxious stimulation of the whisker pad via chronically implanted stimulating microwires, whilst optical recordings were made from the contralateral somatosensory cortex through a thin cranial window. The responses we measure from un-anaesthetised animals are substantially different from those reported in previous studies that used anaesthetised animals. These differences include biphasic response regions (initial increases in blood volume and oxygenation followed by subsequent decreases) as well as oscillations in the response time series of awake animals. These haemodynamic response features do not reflect concomitant changes in the underlying neuronal activity and therefore reflect neurovascular or cerebrovascular processes. These hitherto unreported hyperaemic response dynamics may have important implications for the use of anaesthetised animal models in research into the haemodynamic response function.
Abstract:
Digital elevation models (DEMs) play a substantial role in hydrological studies, from understanding catchment characteristics and setting up a hydrological model to mapping flood risk in a region. Depending on the nature of a study and its objectives, a high-resolution and reliable DEM is often desired for setting up a sound hydrological model. However, such a DEM is not always available and is generally high-priced. Obtained through radar-based remote sensing, the Shuttle Radar Topography Mission (SRTM) provides a publicly available DEM with a resolution of 92 m outside the US, and is a valuable source of elevation data where no surveyed DEM is available. However, apart from its coarse resolution, SRTM suffers from inaccuracy, especially over areas with dense vegetation cover, owing to the limitation that radar signals do not penetrate the canopy. This can lead to improper model setup as well as erroneous flood risk mapping. This paper attempts to improve the SRTM dataset using the Normalised Difference Vegetation Index (NDVI), derived from the visible red and near-infrared bands of Landsat at 30 m resolution, together with Artificial Neural Networks (ANNs). The assessment of the improvement and the applicability of this method in hydrology are highlighted and discussed.
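The abstract does not give the correction procedure; a minimal sketch under assumed details: NDVI is computed from the red and near-infrared bands in the usual way, and a small neural network (here scikit-learn's MLPRegressor, standing in for whatever ANN the authors used) learns a vegetation-induced elevation bias from synthetic data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def ndvi(red, nir):
    """Normalised Difference Vegetation Index from red/near-infrared bands."""
    return (nir - red) / (nir + red + 1e-9)   # epsilon guards against 0/0

# Illustrative (synthetic) correction: learn the canopy-induced SRTM bias
# from NDVI, then subtract the predicted bias from the raw SRTM heights.
rng = np.random.default_rng(0)
veg = rng.uniform(0.0, 0.9, 1000)             # stand-in NDVI values
ground = rng.uniform(0.0, 100.0, 1000)        # stand-in true elevations (m)
srtm = ground + 15.0 * veg                    # assumed canopy bias in the DEM
X = np.column_stack([srtm, veg])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, srtm - ground)                     # target: the bias to remove
corrected = srtm - net.predict(X)             # bias-corrected elevations
```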
Abstract:
In the seismic analysis of complex structures, the mathematical model employed should include not only the irregular distributions of mass and stiffness but also the three-dimensional nature of the seismic excitation. In practice, the large number of degrees of freedom involved limits this type of analysis to the availability of large computers. This work presents a simplified procedure for evaluating the amplification of seismic motion in soil layers. Its application would make it possible to establish criteria for assessing when soil-structure interaction models more complex than those habitually employed are needed. The proposed procedure has the following characteristics: (a) rigid rock motion defined in terms of three orthogonal components, with a vertical direction of propagation; (b) a soil constitutive equation that includes nonlinearity, plasticity, load-history dependence, energy dissipation and volume change; (c) a soil profile discretised as a system of lumped masses. An incremental formulation of the equations of motion is used, with direct integration in the time domain. The pseudo-elastic soil properties are evaluated at each integration interval as a function of the stress state resulting from the simultaneous action of the three components of the excitation. The correct behaviour of the proposed procedure is verified through one-dimensional analyses (horizontal excitation), including comparative studies against solutions presented by several authors. Three-dimensional analyses (simultaneous action of the three excitation components) using real seismic records are also presented. The influence of the dimension of the analysis (one three-dimensional analysis versus three one-dimensional analyses) on the response of soil layers subjected to different levels of excitation is examined; that is, the limitation of the principle of superposition of effects.
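The abstract describes a lumped-mass column integrated incrementally in the time domain; a linear, single-component sketch of that kind of integration scheme follows (explicit central-difference stepping; the paper's actual procedure re-evaluates the nonlinear pseudo-elastic properties at every interval and handles three excitation components simultaneously):

```python
import numpy as np

def shear_column_response(ag, dt, m, k, c):
    """Horizontal response of a lumped-mass soil column on rigid rock.

    ag: base acceleration time series; m, c: per-mass arrays; k: spring
    stiffnesses, with k[0] connecting the lowest mass to the rock.
    Explicit central-difference integration (dt must be below the
    stability limit). Linear sketch only.
    """
    n = len(m)
    K = np.zeros((n, n))                      # tridiagonal shear stiffness
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    u_prev = np.zeros(n)                      # relative displacements
    u = np.zeros(n)
    history = []
    for a in ag:
        # m*u'' + c*u' + K*u = -m*a, in the relative-motion formulation
        force = -m * a - K @ u - c * (u - u_prev) / dt
        u_next = 2 * u - u_prev + dt ** 2 * force / m
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)
```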
Abstract:
This work analyses the challenges which the National Petrol Agency faces in regulating the petroleum industry in Brazil after the end of the monopoly, in the period from 1997 to 2005. Owing to the need to adapt its political strategies to the rules that govern international economic flows, Brazil was forced to use economic regulation in order to control the market. The regulation established in Brazil is not indifferent to imperfect markets; thus a conflict of interests among companies, the government and consumers can be found within this regulatory process. The agency as established does not have enough autonomy to administer regulation. The State, with its paternalistic power, does not allow the agency to fulfil the function for which it was created, even though that function is established by law. A clearly defined regulatory policy would establish a strong and independent agency with clearly delimited competences, avoiding divergent interpretations, while prioritising investment and promoting economic development. The agency will face the challenge of regulating the companies that enter the sector, allowing the opening of the market to new investment initiatives that contribute to the welfare of the country while breaking the monopoly that Petrobras has led since 1953. Combining a stable set of rules with the agility to adapt to change will give the regulator great decision-making power. Flexibility in regulation will make it easier to correct, more efficiently, the rules set at the beginning, on the basis of acquired experience and achieved results. The structure of the agency and the flexibility of the regulation should be oriented towards promoting competition in order to achieve economic and social development.
Abstract:
The present case study aims to describe the development of the project for the opening of a new unit of a bilingual educational organization in the west zone of Rio de Janeiro. The case study presents the reasons for the growth of bilingual educational institutions, the growth of the west zone, and the decision to open a new unit of the school in question in this specific area. The school under study requested that its identity remain undisclosed. A case study was chosen in order to describe the development of the project in a systematic way, through bibliographic and documentary research. The limitation of this work is that it is a single case study. The project can be generalized to other projects within the same institution or replicated in other educational institutions, even if they are not bilingual.