928 results for structure, analysis, modeling
Abstract:
The blast furnace is the world's main ironmaking production unit: it converts iron ore, together with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, because of the high temperatures and pressures, hostile atmosphere, and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. The distribution of the burden materials is particularly difficult to understand because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created to simulate the distribution of the burden material with a bell-less top charging system. The model is fast and can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified against findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilizes the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation of gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was therefore used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and its voidage estimated; the mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. This revision was designed using results from DEM simulations and charging experiments for several charging programs.
The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
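The charging-program search described above is discontinuous and non-differentiable, which is why a genetic algorithm was used. As a rough illustration of that optimization idea only, the sketch below evolves a parameter vector against a target gas temperature profile; the parameter encoding, the placeholder simulate_gas_temperature response, and the target values are invented for the example and are not the thesis model.

```python
# Minimal genetic-algorithm sketch: evolve charging parameters toward a
# target gas temperature distribution. The "simulator" is a toy stand-in
# for a combined burden-distribution + gas-flow model.
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 6                              # hypothetical chute-setting parameters
TARGET = np.linspace(120.0, 400.0, 10)    # hypothetical radial temperature profile

def simulate_gas_temperature(params):
    """Placeholder for the combined burden-distribution + gas-flow model."""
    radii = np.linspace(0.0, 1.0, TARGET.size)
    return params @ np.vander(radii, N_PARAMS).T   # toy polynomial response

def fitness(params):
    # Negative squared deviation from the target temperature distribution.
    return -np.sum((simulate_gas_temperature(params) - TARGET) ** 2)

def evolve(pop_size=40, generations=300, mut_sigma=0.1):
    pop = rng.normal(size=(pop_size, N_PARAMS))
    best = pop[0]
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        best = pop[np.argmax(scores)]
        # Tournament selection: the better of two random individuals survives.
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(scores[a] > scores[b], a, b)]
        # Uniform crossover between shifted parent pairs, then Gaussian mutation.
        mask = rng.random((pop_size, N_PARAMS)) < 0.5
        pop = np.where(mask, parents, np.roll(parents, 1, axis=0))
        pop += rng.normal(scale=mut_sigma, size=pop.shape)
        pop[0] = best                     # elitism: keep the incumbent unchanged
    return best

best_params = evolve()
print("final misfit:", -fitness(best_params))
```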
Abstract:
The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity, and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis are used to illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
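To make the quantification setup concrete, here is a minimal sketch of the underlying sigmoidal-model estimation problem: spots are serial dilutions of each sample, all spots share one response curve, and each sample contributes an unknown concentration offset. This is a plain joint least-squares illustration, not the thesis' multi-step procedure; the three-parameter logistic form and all values are assumptions.

```python
# Joint least-squares fit of a shared sigmoidal response curve plus
# per-sample log-concentration offsets, on simulated dilution-series data.
import numpy as np
from scipy.optimize import least_squares

def sigmoid(u, a, b, s):
    # Three-parameter logistic response curve (assumed form).
    return a + b / (1.0 + np.exp(-s * u))

def residuals(params, x, y, sample_idx):
    a, b, s = params[:3]
    theta = params[3:]                    # per-sample log-concentration offsets
    return sigmoid(x + theta[sample_idx], a, b, s) - y

# Simulated data: 8 samples, 6 two-fold dilution steps each.
rng = np.random.default_rng(1)
n_samples, n_steps = 8, 6
theta_true = rng.normal(0.0, 1.0, n_samples)
x = np.tile(np.arange(n_steps) * -np.log(2.0), n_samples)   # log dilution
idx = np.repeat(np.arange(n_samples), n_steps)
y = sigmoid(x + theta_true[idx], 0.5, 4.0, 1.2) + rng.normal(0.0, 0.05, x.size)

p0 = np.concatenate([[0.0, 1.0, 1.0], np.zeros(n_samples)])
fit = least_squares(residuals, p0, args=(x, y, idx))
theta_hat = fit.x[3:]
# Compare centered estimates, since only relative concentrations matter here.
print(np.round((theta_hat - theta_hat.mean()) - (theta_true - theta_true.mean()), 2))
```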
Abstract:
The purpose of this paper is twofold. Firstly, it presents a preliminary and ethnomethodologically-informed analysis of the way in which the growing structure of a particular program's code was ongoingly derived from its earliest stages. This was motivated by an interest in how the detailed structure of the completed program `emerged from nothing' as a product of the concrete practices of the programmer within the framework afforded by the language. The analysis is broken down into three sections that discuss: the beginnings of the program's structure; the incremental development of structure; and finally the code productions that constitute the structure and the importance of the programmer's stock of knowledge. The discussion attempts to understand and describe the emerging structure of code rather than focus on generating `requirements' for supporting the production of that structure. Due to time and space constraints, however, only a relatively cursory examination of these features was possible. Secondly, the paper presents some thoughts on the difficulties associated with the analytic, and in particular ethnographic, study of code, drawing on general problems as well as issues arising from the difficulties and failings encountered as part of the analysis presented in the first section.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. Considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. With linear objectives we can encourage sparse scalings, and with barrier objective functions we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences from RPCA that make theoretical guarantees difficult.
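The scaling problem mentioned above can be made concrete: a frame {f_i} is scalable when nonnegative weights s_i = w_i^2 solve sum_i s_i f_i f_i^T = I, i.e. the reweighted frame is tight. Below is a minimal sketch posed as nonnegative least squares on the vectorized outer products; this is one of several possible optimization formulations, and the random test frame is purely illustrative (the printed residual is zero exactly when that frame happens to be scalable).

```python
# Frame-scaling feasibility as nonnegative least squares:
# find s >= 0 with sum_i s_i * outer(f_i, f_i) = I.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
d, n = 3, 9
F = rng.normal(size=(d, n))                    # frame vectors as columns

# Each column of A is the flattened rank-one operator f_i f_i^T.
A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(n)])
s, residual = nnls(A, np.eye(d).ravel())       # s_i >= 0 by construction

S = (F * s) @ F.T                              # scaled frame operator
print("tight-frame residual:", np.linalg.norm(S - np.eye(d)))
print("weights w_i:", np.round(np.sqrt(s), 3))
```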
Abstract:
Crop models are simplified mathematical representations of the interacting biological and environmental components of the dynamic soil–plant–environment system. Sorghum crop modeling has evolved in parallel with crop modeling capability in general since its origins in the 1960s and 1970s. Here we briefly review the trajectory in sorghum crop modeling leading to the development of advanced models. We then (i) overview the structure and function of the sorghum model in the Agricultural Production Systems sIMulator (APSIM) to exemplify advanced modeling concepts that suit both agronomic and breeding applications, (ii) review an example of the use of sorghum modeling in supporting agronomic management decisions, (iii) review an example of the use of sorghum modeling in plant breeding, and (iv) consider implications for future roles of sorghum crop modeling. Modeling and simulation provide an avenue to explore the consequences of crop management decision options in situations confronted with risks associated with seasonal climate uncertainties. Here we consider the possibility of manipulating planting configuration and density in sorghum as a means to manipulate the productivity–risk trade-off. A simulation analysis of decision options is presented and avenues for its use with decision-makers are discussed. Modeling and simulation also provide opportunities to improve breeding efficiency, either by dissecting complex traits into more amenable targets for genetics and breeding, or by trait evaluation via phenotypic prediction in target production regions to help prioritize effort and assess breeding strategies. Here we consider studies on the stay-green trait in sorghum, which confers a yield advantage in water-limited situations, to exemplify both aspects. The possible future roles of sorghum modeling in agronomy and breeding are discussed, as are opportunities related to their synergistic interaction. The potential to add significant value to the revolution in plant breeding associated with genomic technologies is identified as the new modeling frontier.
Abstract:
Fire is a frequent process in the landscapes of northern Portugal. Previous studies have shown that holm oak (Quercus rotundifolia) woodlands persist after the passage of fire and help to reduce its intensity and rate of spread. The main objectives of this study were to understand and model the effect of holm oak woodlands on fire behaviour at the landscape level in the upper Sabor river basin, located in northeastern Portugal. The impact of holm oak woodlands on fire behaviour was tested in terms of area and configuration according to scenarios simulating the possible distribution of these vegetation units in the landscape, considering holm oak cover percentages of 2.2% (Low), 18.1% (Moderate), 26.0% (High), and 39.8% (Rivers). These scenarios were designed mainly to test 1) the role of holm oak woodlands in fire behaviour and 2) how the configuration of holm oak patches can help to reduce fireline intensity and burned area. Fire behaviour was modelled with FlamMap, simulating fireline intensity and rate of spread on the basis of fuel models associated with each land use and land cover class present in the study area, together with topographic factors (elevation, slope, and aspect) and climatic factors (humidity and wind speed). Two fuel models were used for the holm oak class (interior and edge areas), developed from field data collected in the region. FRAGSTATS was used to analyse the spatial patterns of the fireline intensity classes, using the metrics Class Area (CA), Number of Patches (NP), and Largest Patch Index (LPI). The results indicated that fireline intensity and rate of spread varied between scenarios and between fuel models for the holm oak woodland. Mean fireline intensity and mean rate of spread decreased as the percentage of holm oak woodland area in the landscape increased. The metrics CA, NP, and LPI also varied between scenarios and fuel models, decreasing as the percentage of holm oak woodland area increased. This study showed that variation in the cover percentage and spatial configuration of holm oak woodlands influences fire behaviour, reducing, on average, fireline intensity and rate of spread, and suggesting that holm oak woodlands can be used as a preventive silvicultural measure to reduce fire risk in this region.
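For concreteness, the class-level metrics named above (CA, NP, LPI) can be computed from a classified fireline-intensity raster with a connected-component pass. The sketch below uses a random toy raster and an assumed 30 m cell size in place of an actual FlamMap output:

```python
# Class Area (CA), Number of Patches (NP) and Largest Patch Index (LPI)
# from a classified raster, using a 4-neighbour patch rule.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
CELL_AREA_HA = 0.09                            # assumed 30 m cells
raster = rng.integers(0, 4, size=(200, 200))   # toy intensity classes 0..3

total_area = raster.size * CELL_AREA_HA
for cls in np.unique(raster):
    mask = raster == cls
    labels, np_patches = ndimage.label(mask)        # NP: number of patches
    sizes = np.bincount(labels.ravel())[1:]         # cells in each patch
    ca = sizes.sum() * CELL_AREA_HA                 # CA: class area (ha)
    lpi = 100.0 * sizes.max() * CELL_AREA_HA / total_area  # LPI: % of landscape
    print(f"class {cls}: CA={ca:.1f} ha  NP={np_patches}  LPI={lpi:.2f}%")
```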
Abstract:
The Theoretical and Experimental Tomography in the Sea Experiment (THETIS 1) took place in the Gulf of Lion to observe the evolution of the temperature field and the process of deep convection during the 1991-1992 winter. The temperature measurements consist of moored sensors, conductivity-temperature-depth and expendable bathythermograph surveys, and acoustic tomography. Because of this diverse data set, and since the field evolves rather fast, the analysis uses a unified framework based on estimation theory and implementing a Kalman filter. The resolution and the errors associated with the model are systematically estimated. Temperature is a good tracer of water masses. The time-evolving three-dimensional view of the field resulting from the analysis shows the details of the three classical convection phases: preconditioning, vigorous convection, and relaxation. In all phases, there is strong spatial nonuniformity, with mesoscale activity, short timescales, and sporadic evidence of advective events (surface capping, intrusions of Levantine Intermediate Water (LIW)). Deep convection, reaching 1500 m, was observed in late February; by late April the field had not yet returned to its initial conditions (strong deficit of LIW). Comparison with available atmospheric flux data shows that advection acts to delay the occurrence of convection and confirms the essential role of buoyancy fluxes. For this winter, the deep mixing results in an injection of anomalously warm water (ΔT ≈ 0.03°C) to a depth of 1500 m, compatible with the deep warming previously reported.
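Since the analysis framework is a Kalman filter, a minimal scalar example of the predict/update cycle may help; the random-walk temperature state and the noise variances below are illustrative placeholders, not the THETIS configuration.

```python
# Minimal linear Kalman filter: track a slowly varying temperature observed
# with noise, alternating predict and measurement-update steps.
import numpy as np

rng = np.random.default_rng(4)
n_steps = 100
Q, R = 0.01, 0.25            # process and measurement noise variances (assumed)

truth = np.cumsum(rng.normal(0.0, np.sqrt(Q), n_steps)) + 13.0   # deg C
obs = truth + rng.normal(0.0, np.sqrt(R), n_steps)

x, P = obs[0], 1.0           # state estimate and its error variance
for z in obs:
    P = P + Q                            # predict: random-walk dynamics
    K = P / (P + R)                      # Kalman gain
    x = x + K * (z - x)                  # update with the innovation
    P = (1.0 - K) * P                    # posterior error variance

print("final estimate %.2f degC, error std %.3f" % (x, np.sqrt(P)))
```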
Abstract:
For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rainfall gauges is sparse, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series. First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations, and wet spell intensities using univariate frequency distributions separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step, a multi-site resampling procedure is applied to the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for locations with short observation records in two mesoscale catchments of the Bode river basin in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as observed flood frequencies. The presented model has the potential to be used for ungauged locations through regionalisation of the model parameters.
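A minimal sketch of the single-site alternating renewal idea follows: dry and wet spell durations are drawn from renewal distributions, wet-spell intensity is tied to duration through a Gaussian 2-copula, and each wet spell is disaggregated with a flat hourly profile. All distributions and parameters here are invented for illustration, not the fitted ones.

```python
# Alternating renewal rainfall generator with copula-linked wet-spell
# duration and intensity, producing one synthetic year of hourly values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
RHO = 0.6                                   # copula correlation: longer ~ wetter

def wet_spell():
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, RHO], [RHO, 1.0]])
    u = stats.norm.cdf(z)                   # dependent uniforms (Gaussian copula)
    dur = max(1, int(stats.expon.ppf(u[0], scale=4)))     # spell length (hours)
    mean_int = stats.gamma.ppf(u[1], a=2, scale=1.0)      # spell intensity (mm/h)
    return np.full(dur, mean_int)           # flat disaggregation profile

series = []
while len(series) < 24 * 365:
    series.extend([0.0] * int(rng.exponential(20)))       # dry spell (hours)
    series.extend(wet_spell())
hourly = np.array(series[: 24 * 365])
print(f"wet fraction {np.mean(hourly > 0):.2f}, mean wet intensity "
      f"{hourly[hourly > 0].mean():.2f} mm/h")
```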
A new analysis of hydrographic data in the Atlantic and its application to an inverse modeling study
Abstract:
Part 20: Health and Care Networks
Abstract:
This paper is concerned with a stochastic SIR (susceptible-infective-removed) model for the spread of an epidemic amongst a population of individuals with a random network of social contacts that is also partitioned into households. The behaviour of the model as the population size tends to infinity in an appropriate fashion is investigated. A threshold parameter which determines whether or not an epidemic with few initial infectives can become established and lead to a major outbreak is obtained, as are the probability that a major outbreak occurs and the expected proportion of the population that is ultimately infected by such an outbreak, together with methods for calculating these quantities. Monte Carlo simulations demonstrate that these asymptotic quantities accurately reflect the behaviour of finite populations, even for only moderately sized populations. The model is compared and contrasted with related models previously studied in the literature. The effects of the amount of clustering present in the overall population structure and of the infectious period distribution on the outcomes of the model are also explored.
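As a rough numerical companion to the asymptotic results, the sketch below Monte Carlo-estimates a major-outbreak probability and conditional final size for a stochastic SIR epidemic started by one infective. It uses a simple chain-binomial (Reed-Frost-type) approximation rather than the paper's network-and-households model, and the 100-case threshold for a "major" outbreak is an arbitrary cutoff.

```python
# Chain-binomial SIR Monte Carlo: probability of a major outbreak and
# the mean final size conditional on one occurring.
import numpy as np

rng = np.random.default_rng(6)

def final_size(n=2000, mean_degree=6.0, p_transmit=0.3):
    # Per-generation dynamics with one initial infective; R ~ 1.8 here.
    p_pair = mean_degree / (n - 1) * p_transmit   # infective-susceptible pair prob.
    susceptible = np.ones(n, dtype=bool)
    susceptible[0] = False
    n_infected, total = 1, 1
    while n_infected:
        p_escape = (1.0 - p_pair) ** n_infected   # escape all current infectives
        newly = susceptible & (rng.random(n) > p_escape)
        susceptible &= ~newly
        n_infected = int(newly.sum())
        total += n_infected
    return total

sizes = np.array([final_size() for _ in range(300)])
major = sizes[sizes > 100]                        # crude major-outbreak threshold
p_major = major.size / sizes.size
mean_major = major.mean() if major.size else float("nan")
print(f"P(major outbreak) ~ {p_major:.2f}; mean final size | major ~ {mean_major:.0f}")
```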
Abstract:
Determination of combustion metrics for a diesel engine has the potential of providing feedback for closed-loop combustion phasing control to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), peak apparent heat release rate amplitude (PAA), and mean absolute pressure error (MAPE). In-cylinder pressure has been used in the laboratory as the primary mechanism for characterization of combustion rates, and more recently in-cylinder pressure has been used in series production vehicles for feedback control. However, the intrusive measurement with the in-cylinder pressure sensor is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers to estimate combustion metrics in a 9L I6 diesel engine. The transfer path between the accelerometer signal and the in-cylinder pressure signal therefore needs to be modeled. Given the transfer path, the in-cylinder pressure signal and the combustion metrics can be accurately estimated (recovered) from accelerometer signals. The method for determining the transfer path, and its applicability, is critical in utilizing accelerometers for feedback. The single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness for varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model. Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Improvement of the FRF robustness was achieved with both approaches. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between the acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation, on a high pressure common rail (HPCR) 1.9L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research a transfer path was developed between the two and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three methods for transfer path modeling were applied, and the method based on the cepstral smoothing technique led to the most accurate results, with an averaged estimation error of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as components.
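The SISO FRF step can be illustrated with a standard H1 estimator, H1(f) = S_xy(f) / S_xx(f), computed from cross- and auto-spectra. The sketch below uses synthetic stand-in signals and an invented structural path; it shows the estimation mechanics only, not the engine data or the adaptation schemes discussed above.

```python
# H1 frequency response function estimate between a (synthetic) block
# accelerometer signal and a (synthetic) in-cylinder pressure signal.
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs = 20_000                                   # Hz, assumed sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)

# Stand-in "cylinder pressure" with two firing-related tones, and a fake
# structural path to the block: low-pass filtering plus sensor noise.
pressure = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
b, a = signal.butter(2, 0.2)
accel = signal.lfilter(b, a, pressure) + 0.05 * rng.normal(size=t.size)

f, Sxy = signal.csd(accel, pressure, fs=fs, nperseg=4096)     # cross-spectrum
_, Sxx = signal.welch(accel, fs=fs, nperseg=4096)             # input auto-spectrum
_, Cxy = signal.coherence(accel, pressure, fs=fs, nperseg=4096)
H1 = Sxy / Sxx                                # accel -> pressure FRF

k = np.argmax(np.abs(H1) * (Cxy > 0.9))       # strongest well-identified band
print(f"|H1| peak at {f[k]:.0f} Hz, coherence {Cxy[k]:.2f}")
```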
Abstract:
Geography has almost become obsolete. The world's goods and services can now be accessed instantaneously through electronic commerce. Small and medium-sized countries have felt the cold winds of change blowing, and have adopted the "safety in numbers" philosophy. Regional organisations have sprung up throughout the world, with their original raison d'être being the encouragement and development of regional trading blocs. Two of the most developed regional groupings are the EU/EC and NAFTA. These two organisations represent two quite different philosophies of regional trade groupings, with contrasting legal structures. The advent of trade globalisation, with the founding of the WTO, has brought these two approaches into confrontation, as each side of the Atlantic Ocean tries to influence the development of the nascent WTO. This paper examines the two contrasting legal structures, and the conflict on an interregional level that they are engendering.
Abstract:
Blast is a major disease of rice in Brazil, the largest rice-producing country outside Asia. This study aimed to assess the genetic structure and mating-type frequency in a contemporary Pyricularia oryzae population, which caused widespread epidemics during the 2012/13 season in the Brazilian lowland subtropical region. Symptomatic leaves and panicles were sampled in flooded rice fields in the states of Rio Grande do Sul (RS, 34 fields) and Santa Catarina (SC, 21 fields). The polymorphism at ten simple sequence repeat (SSR, or microsatellite) loci and the presence of the MAT1-1 or MAT1-2 idiomorphs were assessed in a population comprised of 187 isolates. Only the MAT1-2 idiomorph was found, and 162 genotypes were identified by the SSR analysis. A discriminant analysis of principal components (DAPC) of the SSR data resolved four genetic groups, which were strongly associated with the cultivar of origin of the isolates. There were high levels of genotypic diversity and moderate levels of gene diversity regardless of whether isolates were grouped into subpopulations based on geographic region, host cultivar, or cultivar within region. While regional subpopulations were weakly differentiated, high genetic differentiation was found among subpopulations comprised of isolates from different cultivars. The data suggest that the rice blast pathogen population in southern Brazil is comprised of clonal lineages that are adapting to specific host cultivars. Farmers should avoid the use of susceptible cultivars over large areas, and breeders should focus on enlarging the genetic basis of new cultivars.
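The DAPC step can be sketched as a PCA reduction followed by a linear discriminant analysis on the retained components; this is only the PCA+LDA skeleton of the method, with a simulated genotype matrix standing in for the SSR data rather than the tooling used in the study.

```python
# DAPC-style analysis: PCA on multilocus genotypes, then discriminant
# analysis of the retained principal components by group of origin.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n_per_group, n_loci, n_groups = 45, 10, 4     # ~180 isolates, 10 SSR loci
centers = rng.integers(2, 12, size=(n_groups, n_loci))    # group allele profiles
X = np.vstack([c + rng.poisson(1.0, (n_per_group, n_loci)) for c in centers])
y = np.repeat(np.arange(n_groups), n_per_group)           # cultivar of origin

dapc = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis())
dapc.fit(X, y)
print("reassignment accuracy:", round(dapc.score(X, y), 2))
coords = dapc.transform(X)                    # discriminant-function coordinates
print("DA coordinates shape:", coords.shape)  # (180, n_groups - 1)
```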