949 results for relaxed optimization models
Abstract:
Time-domain models of marine structures based on frequency-domain data are usually built upon the Cummins equation. This type of model is a vector integro-differential equation which involves convolution terms. These convolution terms are not convenient for the analysis and design of motion control systems. In addition, these models are not efficient in terms of simulation time or ease of implementation in standard simulation packages. For these reasons, different methods have been proposed in the literature as approximate alternative representations of the convolutions. Because the convolution is a linear operation, different approaches can be followed to obtain an approximately equivalent linear system in the form of either transfer function or state-space models. This process involves the use of system identification, and several options are available depending on how the identification problem is posed. This raises the question of whether one method is better than the others. This paper therefore has three objectives. The first objective is to revisit some of the methods for replacing the convolutions, which have been reported in different areas of analysis of marine systems: hydrodynamics, wave energy conversion, and motion control systems. The second objective is to compare the different methods in terms of complexity and performance. For this purpose, a model for the response in the vertical plane of a modern containership is considered. The third objective is to describe the implementation of the resulting model in the standard simulation environment Matlab/Simulink.
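The convolution-replacement idea in this abstract can be illustrated with a generic sketch (not the specific methods compared in the paper): a synthetic retardation kernel of the form a·e^(−bt)·cos(wt) is fitted to kernel samples, and that fitted kernel admits an exact two-state realization, so the convolution integral can be replaced by a small linear system. All numbers below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical retardation-kernel samples; in practice these are
# computed from frequency-domain added-mass and damping data.
t = np.linspace(0.0, 10.0, 200)
k_samples = 1.5 * np.exp(-0.8 * t) * np.cos(1.2 * t)

def kernel(t, a, b, w):
    # Second-order kernel ansatz: a * exp(-b*t) * cos(w*t)
    return a * np.exp(-b * t) * np.cos(w * t)

(a, b, w), _ = curve_fit(kernel, t, k_samples, p0=[1.0, 1.0, 1.0])

# State-space realization of the fitted kernel: the impulse response of
# (A, B, C) equals a * exp(-b*t) * cos(w*t), so the convolution term
# mu(t) = int_0^t k(t - tau) v(tau) dtau  becomes
#   xdot = A x + B v,   mu = C x
A = np.array([[-b, -w], [w, -b]])
B = np.array([[a], [0.0]])
C = np.array([[1.0, 0.0]])
```

Since C·exp(At)·B = a·e^(−bt)·cos(wt), the state-space model reproduces the fitted kernel exactly; higher-order kernels would use sums of such second-order blocks.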
Abstract:
The motion of marine vessels has traditionally been studied using two different approaches: manoeuvring and seakeeping. These two approaches use different reference frames and coordinate systems to describe the motion. This paper derives the kinematic models that characterize the transformation of motion variables (position, velocity, acceleration) and forces between the different coordinate systems used in these theories. The derivations presented here are done in terms of the formalism adopted in robotics. The advantage of this formulation is the use of matrix notation and operations. As an application, the transformation of the linear equations of motion used in seakeeping into body-fixed coordinates is considered for both zero and forward speed.
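The core coordinate-transformation machinery mentioned in this abstract can be sketched with the standard zyx Euler-angle rotation used in marine robotics; the function name and the numbers are illustrative, not taken from the paper.

```python
import numpy as np

def rotation_zyx(phi, theta, psi):
    """Body-to-inertial rotation matrix from roll (phi), pitch (theta),
    and yaw (psi), composed as Rz(psi) @ Ry(theta) @ Rx(phi)."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cphi, -sphi], [0, sphi, cphi]])
    Ry = np.array([[cth, 0, sth], [0, 1, 0], [-sth, 0, cth]])
    Rz = np.array([[cpsi, -spsi, 0], [spsi, cpsi, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Body-fixed linear velocity (surge, sway, heave) mapped to the
# inertial frame for a vessel with a 90-degree heading:
nu = np.array([2.0, 0.5, 0.1])            # m/s in the body frame
R = rotation_zyx(0.0, 0.0, np.pi / 2)     # roll = pitch = 0, yaw = 90 deg
eta_dot = R @ nu                          # inertial-frame velocity
```

The same matrix formalism extends to forces and, with the angular-rate transformation, to the full six-degree-of-freedom kinematics discussed in the paper.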
Abstract:
In vivo osteochondral defect models predominantly consist of small animals, such as rabbits. Although they have the advantage of low cost and manageability, their joints are smaller and more easily healed than those of larger animals or humans. We hypothesized that osteochondral cores from large animals can be implanted subcutaneously in rats to create an ectopic osteochondral defect model for routine and high-throughput screening of multiphasic scaffold designs and/or tissue-engineered constructs (TECs). Bovine osteochondral plugs with a 4 mm diameter osteochondral defect were fitted with novel multiphasic osteochondral grafts composed of chondrocyte-seeded alginate gels and osteoblast-seeded polycaprolactone scaffolds, prior to being implanted subcutaneously in rats with bone morphogenetic protein-7. After 12 weeks of in vivo implantation, histological and micro-computed tomography analyses demonstrated that TECs are susceptible to mineralization. Additionally, there was limited bone formation in the scaffold. These results suggest that the current model requires optimization to facilitate robust bone regeneration and vascular infiltration into the defect site. Taken together, this study provides a proof-of-concept for a high-throughput osteochondral defect model. With further optimization, the presented hybrid in vivo model may address the growing need for a cost-effective way to screen osteochondral repair strategies before moving to large animal preclinical trials.
Abstract:
The operation of the law rests on the selection of an account of the facts. Whether this involves prediction or postdiction, it is not possible to achieve certainty. Any attempt to model the operation of the law completely will therefore raise questions of how to model the process of proof. In the selection of a model a crucial question will be whether the model is to be used normatively or descriptively. Focussing on postdiction, this paper presents and contrasts the mathematical model with the story model. The former carries the normative stamp of scientific approval, whereas the latter has been developed by experimental psychologists to describe how humans reason. Neil Cohen's attempt to use a mathematical model descriptively provides an illustration of the dangers in not clearly setting this parameter of the modelling process. It should be kept in mind that the labels 'normative' and 'descriptive' are not eternal. The mathematical model has its normative limits, beyond which we may need to critically assess models with descriptive origins.
Abstract:
This paper presents a method for the estimation of thrust model parameters of uninhabited airborne systems using specific flight tests. Particular tests are proposed to simplify the estimation. The proposed estimation method is based on three steps. The first step uses a regression model in which the thrust is assumed constant. This allows us to obtain biased initial estimates of the aerodynamic coefficients of the surge model. In the second step, a robust nonlinear state estimator is implemented using the initial parameter estimates, and the model is augmented by considering the thrust as a random walk. In the third step, the estimate of the thrust obtained by the observer is used to fit a polynomial model in terms of the propeller advance ratio. We consider a numerical example based on Monte Carlo simulations to quantify the sampling properties of the proposed estimator under realistic flight conditions.
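The third step of the approach described above can be sketched with synthetic data: thrust estimates (which would come from the observer) are regressed on the propeller advance ratio J = V/(nD) with an ordinary polynomial fit. The thrust curve and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observer-based thrust estimates, indexed by the
# propeller advance ratio J = V / (n * D), where V is airspeed,
# n is the propeller revolution rate, and D is its diameter.
J = np.linspace(0.2, 0.9, 40)
T_true = 120.0 - 60.0 * J - 35.0 * J**2        # illustrative curve [N]
T_est = T_true + rng.normal(0.0, 1.0, J.size)  # estimates + noise

# Fit a quadratic thrust model T(J) = c0 + c1*J + c2*J^2
coeffs = np.polyfit(J, T_est, deg=2)           # returns [c2, c1, c0]
T_model = np.polyval(coeffs, J)
```

In the paper's setting the polynomial degree and the data would come from the flight tests; the fit itself is standard least squares.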
Abstract:
This paper presents the application of a statistical method for model structure selection of lift-drag and viscous damping components in ship manoeuvring models. The damping model is posed as a family of linear stochastic models, which is postulated based on previous work in the literature. Then a nested hypothesis-testing problem is considered. The testing reduces to a recursive comparison of two competing models, for which optimal tests in the Neyman sense exist. The method yields a preferred model structure and its initial parameter estimates. Alternatively, the method can give a reduced set of likely models. Using simulated data, we study how the selection method performs when there is both uncorrelated and correlated noise in the measurements. The first case is related to instrumentation noise, whereas the second case is related to spurious wave-induced motion often present during sea trials. We then consider the model structure selection of a modern high-speed trimaran ferry from full-scale trial data.
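The paper uses optimal tests in the Neyman sense on stochastic models; the everyday nested-regression analogue of comparing two competing damping structures is the F-test, sketched here on synthetic data (a linear damping term against a model that adds a quadratic v|v| term; all coefficients invented).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic manoeuvring-style data with linear + quadratic damping.
v = rng.uniform(-2.0, 2.0, 200)
y = -1.5 * v - 0.8 * v * np.abs(v) + rng.normal(0.0, 0.1, v.size)

# Restricted model: linear damping only.  Full model: adds a v|v| term.
X1 = np.column_stack([v])
X2 = np.column_stack([v, v * np.abs(v)])

def rss(X, y):
    # Residual sum of squares of the least-squares fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

rss1, rss2 = rss(X1, y), rss(X2, y)
n, p1, p2 = y.size, X1.shape[1], X2.shape[1]

# F-statistic for the nested comparison; a small p-value favours
# keeping the extra damping term.
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(F, p2 - p1, n - p2)
```

Applied recursively over a nested family of candidate structures, this kind of comparison yields a preferred model, which is the shape of the procedure the abstract describes.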
Abstract:
Jackson (2005) developed a hybrid model of personality and learning, known as the learning styles profiler (LSP), which was designed to span biological, socio-cognitive, and experiential research foci of personality and learning research. The hybrid model argues that functional and dysfunctional learning outcomes can be best understood in terms of how cognitions and experiences control, discipline, and re-express the biologically based scale of sensation-seeking. In two studies with part-time workers undertaking tertiary education (N = 137 and 58), established models of approach and avoidance from each of the three different research foci were compared with Jackson's hybrid model in their predictiveness of leadership, work, and university outcomes using self-report and supervisor ratings. Results showed that the hybrid model was generally optimal and, as hypothesized, that goal orientation was a mediator of sensation-seeking on outcomes (work performance, university performance, leader behaviours, and counterproductive work behaviour). Our studies suggest that the hybrid model has considerable promise as a predictor of work and educational outcomes as well as dysfunctional outcomes.
Abstract:
Accurate prediction of incident duration not only provides important information for a Traffic Incident Management System, but is also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are found for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The significant impact factors are identified for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
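The distribution-selection step described above can be sketched as follows: fit the three candidate families to a duration sample and rank them by AIC. The data here are synthetic (not the SIMS data), and SciPy's `fisk` distribution is its name for the log-logistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic clearance-time sample in minutes; real data would come
# from an incident management system.
durations = stats.weibull_min.rvs(1.6, scale=40.0, size=500,
                                  random_state=rng)

candidates = {
    "gamma": stats.gamma,
    "log-logistic": stats.fisk,     # scipy's log-logistic
    "weibull": stats.weibull_min,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)   # fix the location at zero
    k = len(params) - 1                    # free parameters (loc fixed)
    loglik = np.sum(dist.logpdf(durations, *params))
    aic[name] = 2 * k - 2 * loglik

best = min(aic, key=aic.get)               # lowest AIC wins
```

Running this per incident type (crash, stationary vehicle, hazard) reproduces the kind of per-category best-fit table the abstract reports.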
Abstract:
Commodity price modeling is normally approached in terms of structural time-series models, in which the different components (states) have a financial interpretation. The parameters of these models can be estimated using maximum likelihood. This approach results in a non-linear parameter estimation problem and thus a key issue is how to obtain reliable initial estimates. In this paper, we focus on the initial parameter estimation problem for the Schwartz-Smith two-factor model commonly used in asset valuation. We propose the use of a two-step method. The first step considers a univariate model based only on the spot price and uses a transfer function model to obtain initial estimates of the fundamental parameters. The second step uses the estimates obtained in the first step to initialize a re-parameterized state-space-innovations based estimator, which includes information related to future prices. The second step refines the estimates obtained in the first step and also gives estimates of the remaining parameters in the model. This paper is part tutorial in nature and gives an introduction to aspects of commodity price modeling and the associated parameter estimation problem.
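The Schwartz-Smith model named above decomposes the log spot price as ln S_t = chi_t + xi_t, with chi a mean-reverting (Ornstein-Uhlenbeck) short-term deviation and xi a Brownian motion with drift for the long-term equilibrium level. A simulation sketch with illustrative parameters, plus the kind of crude AR(1)-based initial estimate of kappa that a first estimation step might use:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative Schwartz-Smith parameters
kappa, sigma_chi = 1.5, 0.25   # short-term mean-reversion rate, volatility
mu_xi, sigma_xi = 0.03, 0.15   # long-term drift and volatility
dt, n = 1.0 / 252.0, 252 * 5   # daily steps over five years

chi = np.zeros(n)              # short-term deviation
xi = np.zeros(n)               # long-term equilibrium (log) level
xi[0] = np.log(60.0)           # initial spot price of 60

for t in range(1, n):
    chi[t] = chi[t - 1] - kappa * chi[t - 1] * dt \
             + sigma_chi * np.sqrt(dt) * rng.standard_normal()
    xi[t] = xi[t - 1] + mu_xi * dt \
            + sigma_xi * np.sqrt(dt) * rng.standard_normal()

spot = np.exp(chi + xi)        # ln S_t = chi_t + xi_t

# Crude initial estimate of kappa from the AR(1) regression
# chi_t = (1 - kappa*dt) * chi_{t-1} + noise  (chi is latent in
# practice; here we use the simulated path for illustration).
phi = np.polyfit(chi[:-1], chi[1:], 1)[0]
kappa_hat = (1.0 - phi) / dt
```

In practice chi and xi are latent and must be estimated jointly from spot and futures prices, which is exactly why the paper's two-step initialization is needed; this sketch only shows the model's structure.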
Abstract:
The construction industry accounts for a tenth of global GDP. Still, challenges such as slow adoption of new work processes, islands of information, and legal disputes remain frequent, industry-wide occurrences despite various attempts to address them. In response, IT-based approaches have been adopted to explore collaborative ways of executing construction projects. Building Information Modelling (BIM) is an exemplar of integrative technologies whose 3D-visualisation capabilities have fostered collaboration, especially between clients and design teams. Yet, the ways in which specification documents are created and used to capture clients' expectations based on industry standards have remained largely unchanged since the 18th century. As a result, specification-related errors are still commonplace in an industry where vast amounts of information are consumed as well as produced in the course of project implementation in the built environment. By implication, processes such as cost planning, which depend on specification-related information, remain largely inaccurate even with the use of BIM-based technologies. This paper briefly distinguishes between non-BIM-based and BIM-based specifications and reports on ongoing efforts geared towards the latter. We review exemplars aimed at extending Building Information Models to specification information embedded within the objects in a product library and explore a viable way of reasoning about a semi-automated process of specification using our product library.
Abstract:
The articles collected here in this special edition Epithelial-Mesenchymal (EMT) and Mesenchymal-Epithelial Transitions (MET) in Cancer provide a snapshot of the very rapidly progressing cinemascope of the involvement of these transitions in carcinoma progression. PubMed analysis of EMT and cancer shows an exponential increase in the last few years in the number of papers and reviews published under these terms (Fig. 1). The last few years have seen these articles appearing in high-calibre journals including Nature, Nature Cell Biology, Cancer Cell, PNAS, JNCI, JCI, and Cell, signaling the acceptance and quality of work in this field.
Abstract:
We investigated the effects of the matrix metalloproteinase 13 (MMP13)-selective inhibitor, 5-(4-{4-[4-(4-fluorophenyl)-1,3-oxazol-2-yl]phenoxy}phenoxy)-5-(2-methoxyethyl) pyrimidine-2,4,6(1H,3H,5H)-trione (Cmpd-1), on the primary tumor growth and breast cancer-associated bone remodeling using xenograft and syngeneic mouse models. We used human breast cancer MDA-MB-231 cells inoculated into the mammary fat pad and left ventricle of BALB/c Nu/Nu mice, respectively, and spontaneously metastasizing 4T1.2-Luc mouse mammary cells inoculated into the mammary fat pad of BALB/c mice. In a prevention setting, treatment with Cmpd-1 markedly delayed the growth of primary tumors in both models, and reduced the onset and severity of osteolytic lesions in the MDA-MB-231 intracardiac model. Intervention treatment with Cmpd-1 on established MDA-MB-231 primary tumors also significantly inhibited subsequent growth. In contrast, no effects of Cmpd-1 were observed on soft organ metastatic burden following intracardiac or mammary fat pad inoculations of MDA-MB-231 and 4T1.2-Luc cells respectively. MMP13 immunostaining of clinical primary breast tumors and experimental mouse tumors revealed intra-tumoral and stromal expression in most tumors, and vasculature expression in all. MMP13 was also detected in osteoblasts in clinical samples of breast-to-bone metastases. The data suggest that MMP13-selective inhibitors, which lack musculoskeletal side effects, may have therapeutic potential both in primary breast cancer and cancer-induced bone osteolysis.
Abstract:
We construct a two-scale mathematical model for modern, high-rate LiFePO4 cathodes. We attempt to validate against experimental data using two forms of the phase-field model developed recently to represent the concentration of Li+ in nano-sized LiFePO4 crystals. We also compare this with the shrinking-core based model we developed previously. Validating against high-rate experimental data, in which electronic and electrolytic resistances have been reduced, is an excellent test of the validity of the crystal-scale model used to represent the phase-change that may occur in LiFePO4 material. We obtain poor fits with the shrinking-core based model, even with fitting based on “effective” parameter values. Surprisingly, using the more sophisticated phase-field models on the crystal-scale results in poorer fits, though a significant parameter regime could not be investigated due to numerical difficulties. Separate to the fits obtained, using phase-field based models embedded in a two-scale cathodic model results in “many-particle” effects consistent with those reported recently.
Abstract:
In this paper, we present fully Bayesian experimental designs for nonlinear mixed effects models, in which we develop simulation-based optimal design methods to search over both continuous and discrete design spaces. Although Bayesian inference has commonly been performed on nonlinear mixed effects models, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models that require searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. In this paper, the design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and require optimisation using the methods presented in this paper.
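A toy version of the simulation-based design idea in this abstract: score candidate urine-sampling schedules for a one-compartment pharmacokinetic model by averaging a Fisher-information utility (log-determinant) over draws from a parameter prior. This is a pseudo-Bayesian stand-in for the fully Bayesian utilities in the paper; the model, priors, and times are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def conc(t, ka, ke, dose=100.0, v=10.0):
    # One-compartment model with first-order absorption (Bateman curve)
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def utility(times, n_prior=200, sigma=0.1):
    """Average log-det of a finite-difference Fisher information over
    draws of (ka, ke) from a lognormal prior."""
    total = 0.0
    for _ in range(n_prior):
        ka = np.exp(rng.normal(np.log(1.5), 0.2))
        ke = np.exp(rng.normal(np.log(0.3), 0.2))
        eps = 1e-5
        d_ka = (conc(times, ka + eps, ke) - conc(times, ka, ke)) / eps
        d_ke = (conc(times, ka, ke + eps) - conc(times, ka, ke)) / eps
        J = np.column_stack([d_ka, d_ke])
        fim = J.T @ J / sigma**2           # 2x2 information matrix
        total += np.linalg.slogdet(fim)[1]
    return total / n_prior

sparse = np.array([0.5, 2.0, 8.0])         # spread-out sampling times [h]
clustered = np.array([0.4, 0.5, 0.6])      # nearly collinear design
better = "sparse" if utility(sparse) > utility(clustered) else "clustered"
```

Spread-out times separate the absorption and elimination phases and so carry more information than clustered early samples; the paper's methods additionally optimise the number of subjects and samples per subject under cost constraints, which this sketch omits.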
Abstract:
Most existing motorway traffic safety studies using disaggregate traffic flow data aim at developing models for identifying real-time traffic risks by comparing pre-crash and non-crash conditions. A serious shortcoming of those studies is that non-crash conditions are arbitrarily selected and hence not representative: the selected non-crash data might not be comparable with the pre-crash data, and the non-crash/pre-crash ratio is arbitrarily decided, neglecting the abundance of non-crash over pre-crash conditions. Here, we present a methodology for developing a real-time MotorwaY Traffic Risk Identification Model (MyTRIM) using individual vehicle data, meteorological data, and crash data. Non-crash data are clustered into groups called traffic regimes. Thereafter, pre-crash data are classified into regimes to match with the relevant non-crash data. Among the eight traffic regimes obtained, four highly risky regimes were identified, and three regime-based Risk Identification Models (RIM) with sufficient pre-crash data were developed. MyTRIM memorizes the latest risk evolution identified by RIM to predict near-future risks. Traffic practitioners can choose MyTRIM's memory size based on the trade-off between detection and false alarm rates: decreasing the memory size from 5 to 1 increases the detection rate from 65.0% to 100.0% and the false alarm rate from 0.21% to 3.68%. Moreover, critical factors differentiating pre-crash and non-crash conditions are identified and can be used for developing preventive measures. MyTRIM can be used by practitioners in real time as an independent tool for online decision-making or integrated with existing traffic management systems.
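The regime-matching idea above can be illustrated with a toy example: cluster non-crash observations (here, synthetic speed/flow pairs) into regimes with a minimal k-means, then assign a pre-crash observation to its nearest regime so that comparisons stay within regime. This is an illustration of the clustering concept only, not the MyTRIM implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic non-crash observations: (speed km/h, flow veh/h/lane),
# drawn from three illustrative traffic regimes.
centres_true = np.array([[100.0, 800.0], [70.0, 1600.0], [30.0, 1200.0]])
non_crash = np.vstack([
    c + rng.normal(0, [5.0, 80.0], size=(200, 2)) for c in centres_true
])

def kmeans(X, k, iters=50, seed=0):
    # Minimal Lloyd's algorithm; keeps a centre in place if its
    # cluster ever empties.
    r = np.random.default_rng(seed)
    centres = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centres[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centres = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
    return centres, labels

centres, labels = kmeans(non_crash, k=3)

# A pre-crash observation is compared only against its own regime
pre_crash = np.array([68.0, 1550.0])
regime = int(np.argmin(np.linalg.norm(centres - pre_crash, axis=1)))
```

In the paper the regimes also incorporate meteorological data and feed regime-specific risk models; the sketch only shows the matching step that makes pre-crash and non-crash conditions comparable.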