859 results for Analytical Model
Abstract:
Although there has been substantial research into the occupational health and safety sector over the past forty years, it has generally focused on statistical analyses of data related to costs and/or fatalities and injuries. There is a lack of mathematical modelling of the interactions between workers and the resulting safety dynamics of the workplace, and little work investigating the potential impact of different safety intervention programs prior to their implementation. In this article, we present a fundamental, differential-equation-based model of workplace safety that treats worker safety habits similarly to an infectious disease in an epidemic model. Analytical results for the model, derived via phase-plane and stability analysis, are discussed. The model is coupled with a model of a generic safety strategy aimed at minimising unsafe work habits to produce an optimal control problem, which is solved using the forward-backward sweep numerical scheme implemented in Matlab.
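The forward-backward sweep scheme named in the abstract can be sketched in a few dozen lines. The sketch below is a generic illustration, not the authors' model: the single-state "unsafe habit" dynamics, the cost weights A and B, and all parameter values are invented for the demonstration, and Python is used in place of the paper's Matlab.

```python
import numpy as np

def forward_backward_sweep(beta=0.5, gamma=0.1, A=1.0, B=2.0,
                           u_max=0.5, I0=0.1, T=20.0, n=1000,
                           tol=1e-6, max_iter=500):
    """Forward-backward sweep for a hypothetical one-state model.

    State:    dI/dt = beta*(1-I)*I - (gamma+u)*I   (I = unsafe-habit fraction)
    Cost:     J = int_0^T [ A*I + (B/2)*u^2 ] dt   (minimised)
    Adjoint:  dl/dt = -(A + l*(beta - 2*beta*I - gamma - u)),  l(T) = 0
    Control:  u* = clip(l*I/B, 0, u_max)           (from dH/du = 0)
    """
    dt = T / n
    u = np.zeros(n + 1)          # control guess
    I = np.zeros(n + 1)          # state
    lam = np.zeros(n + 1)        # adjoint
    f = lambda I_, u_: beta*(1-I_)*I_ - (gamma+u_)*I_
    g = lambda l_, I_, u_: -(A + l_*(beta - 2*beta*I_ - gamma - u_))
    for _ in range(max_iter):
        u_old = u.copy()
        # forward sweep (RK4) for the state
        I[0] = I0
        for k in range(n):
            um = 0.5*(u[k] + u[k+1])
            k1 = f(I[k], u[k])
            k2 = f(I[k] + dt/2*k1, um)
            k3 = f(I[k] + dt/2*k2, um)
            k4 = f(I[k] + dt*k3, u[k+1])
            I[k+1] = I[k] + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        # backward sweep (RK4) for the adjoint, transversality l(T) = 0
        lam[n] = 0.0
        for k in range(n, 0, -1):
            Im = 0.5*(I[k] + I[k-1]); um = 0.5*(u[k] + u[k-1])
            k1 = g(lam[k], I[k], u[k])
            k2 = g(lam[k] - dt/2*k1, Im, um)
            k3 = g(lam[k] - dt/2*k2, Im, um)
            k4 = g(lam[k] - dt*k3, I[k-1], u[k-1])
            lam[k-1] = lam[k] - dt/6*(k1 + 2*k2 + 2*k3 + k4)
        # optimality condition with relaxation, then convergence check
        u = 0.5*(np.clip(lam*I/B, 0.0, u_max) + u_old)
        if np.max(np.abs(u - u_old)) < tol:
            break
    return I, u, lam
```

The sweep alternates a forward state solve with a backward adjoint solve and relaxes the control update, the standard recipe for optimal control problems of this type.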
Abstract:
The value of information technology (IT) is often realized when it continues to be used after users’ initial acceptance. However, previous research on continuing IT usage is limited in that it dismisses the importance of mental goals in directing users’ behaviors and inadequately accommodates the group context of users. This in-progress paper offers a synthesis of several bodies of literature to conceptualize continuing IT usage as a multilevel construct and to view IT usage behavior as directed and energized by a set of mental goals. Drawing on self-regulation theory in social psychology, this paper proposes a process model positioning continuing IT usage as multiple-goal pursuit. An agent-based modeling approach is suggested to further explore causal and analytical implications of the proposed process model.
Abstract:
In the context of the first-year university classroom, this paper develops Vygotsky’s claim that ‘the relations between the higher mental functions were at one time real relations between people’. By taking the main horizontal and hierarchical levels of classroom discourse and dialogue (student-student, student-teacher, teacher-teacher) and marrying these with the possibilities opened up by Laurillard’s conversational framework, we argue that the learning challenge of a ‘troublesome’ threshold concept might be met by a carefully designed sequence of teaching events and experiences for first year students, and we provide a number of strategies that exploit each level of these ‘hierarchies of discourse’. We suggest that an analytical approach to classroom design that embodies these levels of discourse in sequenced dialogic methods could be used by teachers as a strategy to interrogate and adjust teaching-in-practice especially in the first year of university study.
Abstract:
A major challenge in studying coupled groundwater and surface-water interactions arises from the considerable difference in the response time scales of groundwater and surface-water systems affected by external forcings. Although coupled models representing the interaction of groundwater and surface-water systems have been studied for over a century, most have focused on groundwater quantity or quality issues rather than response time. In this study, we present an analytical framework, based on the concept of mean action time (MAT), to estimate the time scale required for groundwater systems to respond to changes in surface-water conditions. MAT can be used to estimate the transient response time scale by analyzing the governing mathematical model. This framework does not require any form of transient solution (either numerical or analytical) to the governing equation, yet it provides a closed form mathematical relationship for the response time as a function of the aquifer geometry, boundary conditions, and flow parameters. Our analysis indicates that aquifer systems have three fundamental time scales: (i) a time scale that depends on the intrinsic properties of the aquifer; (ii) a time scale that depends on the intrinsic properties of the boundary condition; and (iii) a time scale that depends on the properties of the entire system. We discuss two practical scenarios where MAT estimates provide useful insights and we test the MAT predictions using new laboratory-scale experimental data sets.
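The MAT idea can be illustrated on a textbook configuration (our example, not necessarily the paper's): for linear diffusion h_t = D h_xx on [0, L], with a step change in surface-water stage at x = 0 and a no-flow boundary at x = L, integrating the governing equation over time gives the boundary-value problem D T''(x) = -1, T(0) = 0, T'(L) = 0, hence the closed form T(x) = (2Lx - x^2)/(2D), with no transient solution required. The sketch below checks that prediction against a brute-force transient computation, using MAT = integral of the normalised transient F(x,t) over time.

```python
import numpy as np

def mat_check(D=1.0, L=1.0, h0=1.0, nx=101, dt=4e-5, t_end=3.0):
    """Compare the MAT closed form with a transient finite-difference solve.

    Hypothetical setup: h_t = D h_xx on [0, L], h(0, t) = 0 after a step
    change at the surface-water boundary, no-flow at x = L, h(x, 0) = h0.
    """
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]                        # explicit scheme: D*dt/dx^2 < 0.5
    h = np.full(nx, h0)
    h[0] = 0.0                              # step change applied at x = 0
    T_num = np.zeros(nx)                    # accumulates \int F dt
    for _ in range(int(t_end / dt)):
        T_num += (h / h0) * dt              # F(x,t) = h/h0; MAT = \int F dt
        lap = np.zeros(nx)
        lap[1:-1] = (h[2:] - 2*h[1:-1] + h[:-2]) / dx**2
        lap[-1] = 2*(h[-2] - h[-1]) / dx**2  # ghost node for no-flow at x = L
        h = h + dt * D * lap
        h[0] = 0.0                          # hold the Dirichlet boundary
    T_analytic = (2*L*x - x**2) / (2*D)     # MAT closed form
    return x, T_num, T_analytic
```

The closed form is obtained from the governing equation alone, which is the point of the framework: the expensive transient computation above is needed only to verify it.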
Abstract:
We present a rigorous validation of the analytical Amadei solution for the stress concentration around an arbitrarily orientated borehole in general anisotropic elastic media. First, we revisit the theoretical framework of the Amadei solution and present analytical insights that show that the solution does indeed contain all special cases of symmetry, contrary to previous understanding, provided that the reduced strain coefficients β11 and β55 are not equal. It is shown from theoretical considerations and published experimental data that β11 and β55 are not equal for realistic rocks. Second, we develop a 3D finite-element elastic model within a hybrid analytical-numerical workflow that circumvents the need to rebuild and remesh the model for every borehole and material orientation. Third, we show that the borehole stresses computed from the numerical model and the analytical solution match almost perfectly for different borehole orientations (vertical, deviated and horizontal) and for several cases involving isotropic and transversely isotropic symmetries. It is concluded that the analytical Amadei solution is valid with no restrictions on the borehole orientation or elastic anisotropy symmetry.
Abstract:
Accurate modelling of automotive occupant posture is strongly related to the mechanical interaction between human body soft tissue and flexible seat components. This paper presents a finite-element study simulating the deflection of seat cushion foam and supportive seat structures, as well as human buttock and thigh soft tissue when seated. The thigh-buttock surface shell model was based on 95th percentile male subject scan data and made of two layers, covering thin to moderate thigh and buttock proportions. To replicate the effects of skin and fat, the neoprene rubber layer was modelled as a hyperelastic material with viscoelastic behaviour. The analytical seat model is based on a Ford production seat. The result of the finite-element indentation simulation is compared to a previous simulation of an indentation with a hard shell human model of equal geometry, and to the physical indentation result. We conclude that SAE composite buttock form and human-seat indentation of a suspended seat cushion can be validly simulated.
Abstract:
The project applied analytical facilities to characterize the composition and mechanical properties of osteoporosis in maxillary bone using an ovariectomized rat model. It was found that osteoporotic jaw bone contained different amounts of trace elements in comparison with normal bone, which play a significant role in bone quality. The knowledge generated from the study will assist the treatment of jaw bone fracture and dental implant placement.
Abstract:
A plane strain elastic interaction analysis of a strip footing resting on a reinforced soil bed has been made by using a combined analytical and finite element method (FEM). In this approach the stiffness matrix for the footing has been obtained using the FEM. For the reinforced soil bed (half-plane) the stiffness matrix has been obtained using an analytical solution. For the latter, the reinforced zone has been idealised as (i) an equivalent orthotropic infinite strip (composite approach) and (ii) a multilayered system (discrete approach). In the analysis, the interface between the strip footing and reinforced half-plane has been assumed as (i) frictionless and (ii) fully bonded. The contact pressure distribution and the settlement reduction have been given for different depths of footing and schemes of reinforcement in soil. The load-deformation behaviour of the reinforced soil obtained using the above modelling has been compared with some available analytical and model test results. The equivalent orthotropic approach proposed in this paper is easy to program and is shown to predict the reinforcing effects reasonably well.
Abstract:
Regenerable 'gel-coated' cationic resins with fast sorption kinetics and high sorption capacity have application potential for removal of trace metal ions even in large-scale operations. Poly(acrylic acid) has been gel-coated on high-surface-area silica (pre-coated with ethylene-vinyl acetate copolymer providing a thin barrier layer) and insolubilized by crosslinking with a low-molecular-weight diepoxide (epoxy equivalent 180 g) in the presence of benzyl dimethylamine catalyst at 70 °C. In experiments performed for Ca2+ sorption from dilute aqueous solutions of Ca(NO3)2, the gel-coated acrylic resin is found to have nearly 40% higher sorption capacity than the bead-form commercial methacrylic resin Amberlite IRC-50 and also a several times higher rate of sorption. The sorption on the gel-coated sorbent under vigorous agitation has the characteristics of particle diffusion control with homogeneous (gel) diffusion in the resin phase. A new mathematical model is proposed for such sorption on gel-coated ion-exchange resin in a finite bath and solved by applying operator-theoretic methods. The analytical solution so obtained shows good agreement with experimental sorption kinetics at relatively low levels (< 70%) of resin conversion.
Abstract:
We propose an exactly solvable model for the two-state curve-crossing problem. Our model assumes the coupling to be a delta function. It is used to calculate the effect of curve crossing on the electronic absorption spectrum and the resonance Raman excitation profile.
Abstract:
In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real time analysis of the process data enables better control of the processes to obtain a high quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate formation of the isomorphic dehydrate was critical. Thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings TPI was used to create a 3-D image of a coated tablet. 
With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects of either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
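The combination of in-line NIR spectra and principal component analysis used to flag dehydrate formation during drying can be sketched as follows. Everything here is synthetic: the Gaussian "bands", the hydrate-to-dehydrate conversion profile, and the noise level are invented for the demonstration, not taken from the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength = np.linspace(1100, 2500, 400)             # nm, typical NIR range
band = lambda c, w: np.exp(-0.5*((wavelength - c)/w)**2)

# invented pure-component spectra: the hydrate carries strong water bands
hydrate   = 1.0*band(1450, 40) + 0.6*band(1940, 50)
dehydrate = 0.4*band(1450, 40) + 0.1*band(1940, 50) + 0.8*band(2100, 60)

# a drying run: hydrate fraction decays over 30 in-line measurements
frac = np.linspace(1.0, 0.0, 30)
X = np.outer(frac, hydrate) + np.outer(1 - frac, dehydrate)
X += 0.005*rng.standard_normal(X.shape)               # instrument noise

# PCA via SVD on the mean-centred spectra
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                        # PC scores per spectrum
explained = s**2 / np.sum(s**2)                       # explained-variance ratio
```

In this toy setting the first principal component tracks the hydrate-to-dehydrate conversion almost perfectly, which is the mechanism that lets score plots reveal a solid-state transformation as it happens in the dryer.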
Abstract:
It is shown that a leaky aquifer model can be used for well field analysis in hard rock areas, treating the upper weathered and clayey layers as a composite unconfined aquitard overlying a deeper fractured aquifer. Two long-duration pump test studies are reported in granitic and schist regions in the Vedavati river basin. The validity of simplifications in the analytical solution is verified by finite difference computations.
Abstract:
We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation without further computational effort, and it comes with an associated error evaluation. Moreover, it provides an unbiased estimator of the evidence after any fixed number of iterations and it is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision of better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
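The way the evidence falls out of the same PMC sample used for parameter estimation can be seen on a minimal sketch. The toy model below (one Gaussian datum, a Gaussian prior, a one-dimensional Gaussian proposal adapted by weighted moments) is ours for illustration, not the paper's cosmological setup; it is chosen because its evidence is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
y, sigma, tau = 1.0, 1.0, 1.0                  # datum, noise std, prior std

log_prior = lambda th: -0.5*np.log(2*np.pi*tau**2) - 0.5*th**2/tau**2
log_like  = lambda th: -0.5*np.log(2*np.pi*sigma**2) - 0.5*(y - th)**2/sigma**2

mu, v = 0.0, 4.0                               # deliberately poor initial proposal
for _ in range(5):                             # PMC adaptation iterations
    th = mu + np.sqrt(v)*rng.standard_normal(20000)
    log_q = -0.5*np.log(2*np.pi*v) - 0.5*(th - mu)**2/v
    w = np.exp(log_prior(th) + log_like(th) - log_q)   # importance weights
    Z_hat = w.mean()                           # unbiased evidence estimate
    wn = w / w.sum()                           # normalised weights for moments
    mu = np.sum(wn*th)                         # adapt proposal mean
    v = np.sum(wn*(th - mu)**2)                # adapt proposal variance

# closed-form evidence for this conjugate toy model: N(y; 0, sigma^2 + tau^2)
Z_true = np.exp(-0.5*y**2/(sigma**2 + tau**2)) / np.sqrt(2*np.pi*(sigma**2 + tau**2))
```

Each iteration yields both the weighted sample (for posterior moments) and the evidence estimate `Z_hat` as the raw mean of the importance weights, so no extra computation is needed for model selection, which is the property the abstract highlights.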
Abstract:
BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for quintessential traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies were combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root mean square errors of prediction for validation sets were lowest for models comprised of Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse consortium of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits.
This research has demonstrated the validity of vibrational spectroscopy to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global, predictive models encompassing a diverse array of feedstocks.
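The core workflow, a PLS calibration mapping spectra to S/G ratio and evaluated by RMSEP on a held-out validation set, can be sketched with a minimal NIPALS PLS1 implementation. The spectra below are synthetic: the two marker "bands" and their linear dependence on the S/G ratio are invented for the demonstration and are far cleaner than real wood spectra.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1; returns regression vector plus centring terms."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk                      # weight vector from covariance
        w /= np.linalg.norm(w)
        t = Xk @ w                         # scores
        tt = t @ t
        p = Xk.T @ t / tt                  # X loadings
        q = (yk @ t) / tt                  # y loading
        Xk = Xk - np.outer(t, p)           # deflate
        yk = yk - q*t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)    # regression vector in X space
    return B, xm, ym

def pls1_predict(Xnew, B, xm, ym):
    return (Xnew - xm) @ B + ym

# synthetic "Raman" spectra with invented S and G marker bands
rng = np.random.default_rng(2)
wn = np.linspace(800, 1800, 300)           # cm^-1 axis
bandS = np.exp(-0.5*((wn - 1330)/15)**2)   # hypothetical syringyl marker
bandG = np.exp(-0.5*((wn - 1270)/15)**2)   # hypothetical guaiacyl marker
sg = rng.uniform(1.0, 4.0, 80)             # "true" S/G ratios
X = np.outer(sg, bandS) + np.outer(4.5 - sg, bandG)
X += 0.01*rng.standard_normal(X.shape)

# calibration on 60 trees, validation on the remaining 20
B, xm, ym = pls1_fit(X[:60], sg[:60], n_comp=2)
pred = pls1_predict(X[60:], B, xm, ym)
rmsep = np.sqrt(np.mean((pred - sg[60:])**2))
```

On this idealised data the RMSEP is far below the 0.13-0.21 range reported for real feedstocks; the sketch only shows the calibration/validation mechanics, not the difficulty of the real problem.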
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as the wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, making analytical and simulation-based error propagation analysis and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the decision of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis.
The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
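The process-convolution construction of a spatially correlated DEM error model can be sketched as below: white noise is smoothed with a Gaussian kernel (here via FFT, assuming periodic boundaries) and rescaled to a target marginal standard deviation, then both error models are pushed through the linear slope operator. The grid size, cell size, correlation length, and 1 m vertical error are invented for the demonstration; the classical effect that smooth (strongly autocorrelated) error produces smaller slope error than white noise is what motivated the 'worst-case' view that the thesis qualifies for other derivatives.

```python
import numpy as np

def correlated_error(shape, cell, corr_len, sigma, rng):
    """Process-convolution error field: white noise convolved with a
    Gaussian kernel (periodic, via FFT), rescaled to marginal std sigma."""
    ny, nx = shape
    white = rng.standard_normal(shape)
    # wrapped grid coordinates in metres, kernel centred at (0, 0)
    y = np.fft.fftfreq(ny, d=1.0/ny)*cell
    x = np.fft.fftfreq(nx, d=1.0/nx)*cell
    kern = np.exp(-0.5*(y[:, None]**2 + x[None, :]**2)/corr_len**2)
    field = np.real(np.fft.ifft2(np.fft.fft2(white)*np.fft.fft2(kern)))
    return sigma * field / field.std()

rng = np.random.default_rng(3)
cell, sigma = 10.0, 1.0                        # 10 m grid, 1 m vertical error
e_uncorr = sigma*rng.standard_normal((128, 128))
e_corr = correlated_error((128, 128), cell, 50.0, sigma, rng)

# propagate each error model through the (linear) slope operator
dzdy_u, dzdx_u = np.gradient(e_uncorr, cell)
dzdy_c, dzdx_c = np.gradient(e_corr, cell)
slope_err_u = np.hypot(dzdx_u, dzdy_u).std()   # slope error, white noise
slope_err_c = np.hypot(dzdx_c, dzdy_c).std()   # slope error, correlated noise
```

In a full Monte Carlo study many such realisations would be added to the DEM before computing each derivative; the convolution step is what makes generating large numbers of correlated realisations cheap.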