Abstract:
Streptococcus pyogenes causes a spectrum of diseases: severe invasive infections, the post-streptococcal sequelae of acute rheumatic fever (RF) and rheumatic heart disease (RHD), acute glomerulonephritis, and uncomplicated pharyngitis and pyoderma. Efforts to produce a vaccine against S. pyogenes began several decades ago, and different models have been proposed. Here, we describe the methodology used in the development of a new vaccine model, consisting of both T and B protective epitopes constructed as synthetic peptides and recombinant proteins. Two adjuvants were tested in an experimental inbred mouse model: a classical Freund's adjuvant and a new adjuvant (AFCo1) that induces mucosal immune responses and is obtained by calcium precipitation of a proteoliposome derived from the outer membrane of Neisseria meningitidis B. The StreptInCor vaccine epitope co-administered with the AFCo1 adjuvant induced mucosal (IgA) and systemic (IgG) antibodies with a preferential Th1-mediated immune response. No autoimmune reactions were observed, suggesting that the vaccine epitope is safe. (c) 2009 Elsevier Inc. All rights reserved.
Abstract:
The estimation of data transformations is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types and are based on exponential dispersion models. We propose a new class of transformed generalized linear models that extends both the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models, and a simple formula to estimate the r-th moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets. (C) 2009 Elsevier B.V. All rights reserved.
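For orientation, the classical Box and Cox (1964) transformation that this class extends is the standard power family (a textbook formula, not reproduced in the abstract itself):

```latex
% Box-Cox power transformation of a positive response y;
% \lambda = 0 is the limiting logarithmic case.
y(\lambda) =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\[6pt]
\log y, & \lambda = 0.
\end{cases}
```

The proposed class replaces the normal linear model for the transformed response with a generalized linear model, so the transformation parameter and the regression structure are handled together within maximum likelihood estimation.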
Abstract:
For the first time, we introduce a class of transformed symmetric models that extends the Box and Cox models to more general symmetric models. The new class includes all symmetric continuous distributions with a possible non-linear structure for the mean and enables the fitting of a wide range of models to several data types. The proposed methods offer more flexible alternatives to Box-Cox and other existing procedures. We derive a very simple iterative process for fitting these models by maximum likelihood, whereas a direct unconditional maximization would be more difficult. We give simple formulae to estimate the parameter that indexes the transformation of the response variable and the moments of the original dependent variable, which generalize previously published results. We discuss inference on the model parameters. The usefulness of the new class of models is illustrated in an application to a real dataset.
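To make the fitting idea concrete, here is a minimal sketch of profile-likelihood estimation of a Box-Cox-type transformation parameter under a plain normal linear model. It illustrates the general mechanism only; it is not the authors' iterative algorithm for symmetric models, and all data are synthetic.

```python
# Minimal sketch: profile-likelihood estimation of a Box-Cox-type
# transformation parameter lambda, using a normal linear model for
# simplicity. Illustrative only, not the paper's algorithm.
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam

def profile_loglik(y, X, lam):
    z = boxcox(y, lam)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)  # fit mean structure at this lambda
    resid = z - X @ beta
    n = len(y)
    sigma2 = resid @ resid / n
    # Gaussian log-likelihood plus the Jacobian of the transformation
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, 200)
y = np.exp(0.5 + 0.3 * x + rng.normal(0, 0.1, 200))  # log-scale truth: lambda near 0
X = np.column_stack([np.ones_like(x), x])
grid = np.linspace(-1, 1, 81)
lam_hat = grid[np.argmax([profile_loglik(y, X, l) for l in grid])]
print(f"estimated lambda: {lam_hat:.2f}")
```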
Abstract:
We show that the S parameter is not finite in theories of electroweak symmetry breaking in a slice of anti-de Sitter five-dimensional space, with the light fermions localized in the ultraviolet. We compute the one-loop contributions to S from the Higgs sector and show that they are logarithmically dependent on the cutoff of the theory. We discuss the renormalization of S, as well as the implications for bounds from electroweak precision measurements on these models. We argue that, although in principle the choice of renormalization condition could eliminate the S parameter constraint, a more consistent condition would still result in a large and positive S. On the other hand, we show that the dependence on the Higgs mass in S can be entirely eliminated by the renormalization procedure, making it impossible in these theories to extract a Higgs mass bound from electroweak precision constraints.
Abstract:
This paper proposes a method to locate and track people by combining evidence from multiple cameras using the homography constraint. The proposed method uses foreground pixels from simple background subtraction to compute evidence of the location of people on a reference ground plane. For each location, the algorithm computes an amount of support that essentially corresponds to the "foreground mass" above it; pixels that correspond to ground points therefore receive more support. The support is normalized to compensate for perspective effects and accumulated on the reference plane over all camera views. Detecting people on the reference plane then becomes a search for regions of local maxima in the accumulator. Many false positives are filtered out by checking the visibility consistency of the detected candidates against all camera views. The remaining candidates are tracked using Kalman filters and appearance models. Experimental results using challenging data from PETS'06 show good performance of the method in the presence of severe occlusion. Ground truth data also confirms the robustness of the method. (C) 2010 Elsevier B.V. All rights reserved.
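The core accumulation step can be sketched as follows: per-camera foreground masks are warped onto a common ground plane with a homography and summed. This is an illustration under stated assumptions, not the paper's code; the homographies are assumed to come from offline calibration.

```python
# Illustrative sketch of multi-camera evidence accumulation on a ground
# plane. Homographies H_i (image -> reference plane) are assumed given.
import cv2
import numpy as np

def accumulate_support(foreground_masks, homographies, plane_size):
    """foreground_masks: list of binary masks, one per camera.
    homographies: list of 3x3 image-to-ground-plane matrices.
    plane_size: (width, height) of the reference ground-plane grid."""
    acc = np.zeros((plane_size[1], plane_size[0]), dtype=np.float32)
    for mask, H in zip(foreground_masks, homographies):
        warped = cv2.warpPerspective(mask.astype(np.float32), H, plane_size)
        acc += warped  # each camera votes for occupied ground locations
    return acc

# Peaks (local maxima above a threshold) in the accumulator are candidate
# person locations; the paper additionally normalizes for perspective and
# checks visibility consistency across views before Kalman tracking.
```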
Abstract:
Given a fixed set of identical or different-sized circular items, the problem we deal with consists in finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular and also strip objects are considered, and both 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach. (C) 2007 Elsevier Ltd. All rights reserved.
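The abstract does not spell out the models; as orientation, a typical smooth formulation in this line of work expresses non-overlap and containment for circles with centres $c_i$ and radii $r_i$ through twice-differentiable constraints (an assumption based on standard packing formulations, not quoted from the paper):

```latex
% Non-overlap between items i and j, and containment of item i in a
% circular object of radius R centred at the origin; both constraints
% are smooth (twice-differentiable) in the centre coordinates and R.
\|c_i - c_j\|^2 \ge (r_i + r_j)^2 \quad (i < j), \qquad
\|c_i\|^2 \le (R - r_i)^2
```

The objective is then to minimize a measure of the object's size, here the radius R, subject to these constraints.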
Abstract:
One of the main aims of this thesis is to design an optimized commercial photovoltaic (PV) system in Barbados, selecting among variables such as racking type, module type and inverter type based on practicality, technical performance and financial returns to the client. Detailed simulations are run in PVSYST, and financial models are used to compare different systems and their viability. Once the best system is determined from a financial and performance perspective, a detailed design of the optimal PV system is produced using PVSYST and AutoCAD. In doing so, suitable engineering drawings are generated that are detailed enough for construction of the system. Detailed costing, with quotes from relevant manufacturers, suppliers and estimators, is instrumental in determining Balance of System costs as well as total project cost. The final simulated system has a PV capacity of 425 kW and an inverter output of 300 kW, giving an array oversizing ratio of 1.42. The PV system has a weighted Performance Ratio of 77%, a specific yield of 1467 kWh/kWp and a projected annual production of 624 MWh/yr. This system is estimated to offset approximately 28% of Carlton's electrical load annually. Over the course of 20 years, the PV system is projected to produce electricity at a cost of $0.201 USD/kWh, significantly lower than the $0.35 USD/kWh paid to the utility at the time of writing this thesis. Due to the high cost of electricity on the island, an attractive feed-in tariff is not necessary to justify a commercial system that, over its lifetime, produces electricity at less than 60% of the cost of purchasing it from the utility. A simple payback period of 5.4 years and a return on investment of 17% without incentives, in addition to an estimated diversion of 6840 barrels of oil or 2168 tonnes of CO2, provide further compelling justification for installing commercial photovoltaic systems not only at Carlton A-1 Supermarket but also island-wide and regionally, where most electricity is generated from imported fossil fuels.
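A quick arithmetic check of the headline figures, using only values quoted in the abstract:

```python
# Sanity check of figures quoted in the abstract (no additional data assumed).
dc_capacity_kw = 425          # PV array capacity
ac_capacity_kw = 300          # inverter output
oversizing = dc_capacity_kw / ac_capacity_kw
print(f"array oversizing ratio: {oversizing:.2f}")  # ~1.42

lcoe = 0.201                  # projected USD/kWh over 20 years
utility_rate = 0.35           # USD/kWh paid to the utility
print(f"PV cost as share of utility rate: {lcoe / utility_rate:.0%}")  # ~57%, i.e. under 60%
```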
Abstract:
One of the first questions to consider when designing a new roll forming line is the number of forming steps required to produce a profile. The number depends on material properties, the cross-section geometry and tolerance requirements, but the tool designer also wants to minimize the number of forming steps in order to reduce the investment costs for the customer. There are several computer-aided engineering systems on the market that can assist the tool design process. These include more or less simple formulas to predict deformation during forming as well as the number of forming steps. In recent years it has also become possible to use finite element analysis for the design of roll forming processes. The objective of the work presented in this thesis was to answer the following question: how should the roll forming process be designed for complex geometries and/or high strength steels? The work approach included literature studies as well as experimental and modelling work. The experimental part gave direct insight into the process and was also used to develop and validate models of it. Starting with simple geometries and standard steels, the work progressed to more complex profiles of variable depth and width, made of high strength steels. The results obtained are published in seven papers appended to this thesis. In the first study (see paper 1), a finite element model for investigating the roll forming of a U-profile was built. It was used to investigate the effect on longitudinal peak membrane strain and deformation length as yield strength increases (see papers 2 and 3). The simulations showed that the peak strain decreases whereas the deformation length increases when the yield strength increases. The studies described in papers 4 and 5 measured roll load, roll torque, springback and strain history during the U-profile forming process, and the measurement results were used to validate the finite element model of paper 1. The results presented in paper 6 show that the formability of stainless steel (e.g. AISI 301), which in the cold rolled condition has a large martensite fraction, can be substantially increased by heating the bending zone: the heated area becomes austenitic and ductile before roll forming and, thanks to strain-induced martensite formation, the steel regains its martensite content and strength during the subsequent plastic straining. Finally, a new tooling concept for profiles with variable cross-sections is presented in paper 7. The overall conclusions of the present work are that it is now possible to successfully develop profiles of complex geometries (3D roll forming) in high strength steels, and that finite element simulation can be a useful tool in the design of the roll forming process.
Abstract:
Gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their environmental effects. Energy management is the key factor of any sustainable development program, and all aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data from the previous 34 years are used to predict consumption in the coming years. Four linear and nonlinear models are proposed, and six factors are investigated: Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly Natural Gas (NG) demand.
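To illustrate the general approach, the sketch below estimates the coefficients of a linear demand model with a basic particle swarm optimizer. The model form, factor data and PSO settings are placeholders, not those of the thesis.

```python
# Illustrative sketch: fitting a linear demand model by minimizing
# squared error with a basic PSO. Synthetic data stand in for the
# 34 years of consumption and factor series used in the thesis.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (34, 3))            # stand-in for 34 years of 3 factors
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(0, 0.05, 34)  # synthetic "consumption" series

def sse(w):  # fitting criterion: sum of squared errors
    return np.sum((X @ w - y) ** 2)

n_particles, dim, iters = 30, 3, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    # inertia + cognitive + social terms (standard PSO update)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("estimated coefficients:", np.round(gbest, 2))
```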
Abstract:
This map is designed as a resource for students and the public to use and develop a better understanding of the trails system on the Colby Campus. I used a Garmin GPSmap 60CS to chart all the trails on Runnals Hill and in the Arboretum. Then, using ArcGIS, I compiled the tracked trails and laid them over an aerial photo of the campus. Because many of the trails are hard to find, I took digital photos of each trail entry to help the user locate them. Then, by taking note of the grade and width of the trail, I decided which trails were suitable for certain activities. This gives users an idea of where to go for walking, running, mountain biking, cross-country skiing, and snowshoeing.
Abstract:
This study presents an approach to combining uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine learning based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach, hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of a deterministic output from the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to new input data. We used three machine learning models, namely artificial neural networks, model trees and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
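A minimal sketch of the committee idea follows: several regressors are trained to predict a quantile of the Monte Carlo output distribution from the same inputs, and their predictions are merged. The models, features and data here are placeholders (and the merge is a simple average, whereas the study merges outputs dynamically).

```python
# Illustrative committee of uncertainty models predicting a quantile
# target from the same inputs. Synthetic data; model choices are
# stand-ins for the ANN, model tree and locally weighted regression.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, (500, 2))  # stand-in: antecedent precipitation, streamflow
q90 = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 500)  # 90% quantile target

models = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    DecisionTreeRegressor(max_depth=6, random_state=0),  # stands in for a model tree
    KNeighborsRegressor(n_neighbors=15),                 # stands in for locally weighted regression
]
for m in models:
    m.fit(X, q90)

x_new = np.array([[4.0, 2.0]])
committee = np.mean([m.predict(x_new)[0] for m in models])
print(f"merged 90% quantile estimate: {committee:.2f}")
```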
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats where that is the preferred access arrangement for the researcher. By decoupling the data model from data persistence, it is much easier to use, for instance, relational databases interchangeably to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from CF conventions has been designed to handle time series for SWIFT efficiently.
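The decoupling described above can be sketched as follows: the in-memory configuration model knows nothing about storage, and persistence backends are swappable. The class and field names below are illustrative, not SWIFT's actual schema.

```python
# Sketch of decoupling the data model from on-disk persistence.
# SubareaConfig is a hypothetical stand-in for a model configuration.
import json
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:          # in-memory data model, storage-agnostic
    name: str
    area_km2: float
    routing_node: str

def save_json(cfg: SubareaConfig, path: str) -> None:
    """One persistence backend; a relational-database or tab-separated
    backend could be swapped in without touching SubareaConfig."""
    with open(path, "w") as f:
        json.dump(asdict(cfg), f, indent=2)

save_json(SubareaConfig("subarea_1", 42.5, "node_07"), "subarea.json")
```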
Abstract:
New business and technology platforms are required to sustainably manage urban water resources [1,2]. However, any proposed solutions must be cognisant of security, privacy and other factors that may inhibit adoption and hence impact. The FP7 WISDOM project (funded by the European Commission - GA 619795) aims to achieve a step change in water and energy savings via the integration of innovative Information and Communication Technologies (ICT) frameworks to optimize water distribution networks and to enable change in consumer behaviour through innovative demand management and adaptive pricing schemes [1,2,3]. The WISDOM concept centres on the integration of water distribution, sensor monitoring and communication systems coupled with semantic modelling (using ontologies, potentially connected to BIM, to serve as intelligent linkages throughout the entire framework) and control capabilities to provide for near real-time management of urban water resources. Fundamental to this framework are the needs and operational requirements of users and stakeholders at domestic, corporate and city levels, which require the interoperability of a number of demand and operational models fed with data from diverse sources such as sensor networks and crowdsourced information. This has implications for the provenance and trustworthiness of such data and how it can be used, not only in understanding system and user behaviours but, more importantly, in the real-time control of such systems. Adaptive and intelligent analytics will be used to produce decision support systems that will drive the ability to increase the variability of both supply and consumption [3]. This in turn paves the way for adaptive pricing incentives and a greater understanding of the water-energy nexus. This integration is complex and uncertain, yet typical of a cyber-physical system, and its relevance transcends the water resource management domain. The WISDOM framework will be modeled and simulated with initial testing at an experimental facility in France (AQUASIM – a full-scale test-bed facility to study sustainable water management), then deployed and evaluated in two pilots in Cardiff (UK) and La Spezia (Italy). These demonstrators will evaluate the integrated concept, providing insight for wider adoption.
Abstract:
In this paper, we test a version of the conditional CAPM with respect to a local market portfolio, proxied by the Brazilian stock index, during the period 1976-1992. We also test a conditional APT model that uses the difference between the 3-day rate (Cdb) and the overnight rate as a second factor, in addition to the market portfolio, in order to capture the large inflation risk present during this period. The conditional CAPM and APT models are estimated by the Generalized Method of Moments (GMM) and tested on a set of size portfolios created from individual securities traded on the Brazilian markets. The inclusion of this second factor proves to be important for the appropriate pricing of the portfolios.
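For orientation, GMM estimation of such conditional models typically rests on moment conditions requiring pricing errors to be orthogonal to lagged instruments representing the information set (a standard setup, not quoted from the paper):

```latex
% Generic conditional asset-pricing moment conditions for GMM:
% pricing errors e_t are orthogonal to lagged instruments z_{t-1}.
e_t = R_t - \alpha - \beta f_t, \qquad
\mathrm{E}\!\left[\, e_t \otimes z_{t-1} \right] = 0
```

Here $R_t$ collects the size-portfolio returns and $f_t$ the factors: the market portfolio alone in the CAPM, plus the interest-rate spread in the APT version.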
Abstract:
This paper presents evidence on the key role of infrastructure in the Andean Community's trade patterns. Three distinct but related gravity models of bilateral trade are used. The first model aims at identifying the importance of the Preferential Trade Agreement and adjacency for intra-regional trade, while also checking the traditional roles of economic size and distance. The second and third models also assess the evolution of the Trade Agreement and the importance of sharing a common border, but their main goal is to analyze the relevance of including infrastructure in the augmented gravity equation, testing the theoretical assumption that infrastructure endowments, by reducing trade and transport costs, reduce the “distance” between bilateral partners. Indeed, if one accepts distance as a proxy for transportation costs, infrastructure development and improvement drastically modify it. Trade liberalization eliminates most of the distortions that a protectionist tariff system imposes on international business; hence transportation costs nowadays represent a considerably larger barrier to trade than in past decades. As new trade pacts are negotiated in the Americas, borders and old agreements will lose significance; trade among countries will be nearly unrestricted, and bilateral flows will be defined in terms of costs and competitiveness. Competitiveness, however, will only be achieved by improving infrastructure services at all points in the production-distribution chain.
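As orientation, an augmented gravity equation of the kind described relates bilateral trade to economic sizes, distance, adjacency and trade-agreement dummies, and an infrastructure measure. This is a generic log-linear form; the paper's exact specification is not reproduced in the abstract.

```latex
% Generic augmented gravity equation (illustrative specification):
% T_{ij} bilateral trade, Y economic size, D_{ij} distance,
% Adj/PTA adjacency and trade-agreement dummies, Infra infrastructure.
\ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln D_{ij}
           + \beta_4 \,\mathrm{Adj}_{ij} + \beta_5 \,\mathrm{PTA}_{ij}
           + \beta_6 \ln \mathrm{Infra}_{ij} + \varepsilon_{ij}
```

A negative $\beta_3$ together with a positive $\beta_6$ would support the hypothesis that better infrastructure effectively shortens the “distance” between partners.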