186 results for approximate
Abstract:
Many complex aeronautical design problems can be formulated with efficient multi-objective evolutionary optimization methods and game strategies. This book describes the role of advanced innovative evolution tools in finding the solution, or set of solutions, of single- or multi-disciplinary optimization problems. These tools use the concepts of multi-population, asynchronous parallelization and hierarchical topology, which allow models of different fidelity (precise, intermediate and approximate) to coexist, with each node of a hierarchical layer handled by a different evolutionary algorithm. The efficiency of evolutionary algorithms for both single- and multi-objective optimization problems is significantly improved by coupling EAs with games, in particular with a new dynamic methodology named "Hybridized Nash-Pareto games". Multi-objective optimization techniques and robust design problems that take uncertainties into account are introduced and explained in detail. Several applications dealing with civil aircraft and UAV/UCAV systems are implemented numerically and discussed. Applications of increasing optimization complexity are presented, as well as two hands-on test-case problems. These examples focus on aeronautical applications and will be useful to practitioners in laboratory or industrial design environments. The evolutionary methods coupled with games presented in this volume can also be applied to other areas, including surface and marine transport, structures, biomedical engineering, renewable energy and environmental problems.
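To make the hierarchical idea concrete, here is a minimal sketch of a multi-layer evolutionary search in which candidate solutions migrate from a cheap approximate model up to a precise one. All function names, models and settings are illustrative assumptions, not the book's actual tools.

```python
# Minimal sketch of a hierarchical multi-population EA for a simple
# real-valued minimization problem. The three fidelity levels stand in
# for the precise/intermediate/approximate model hierarchy; all of them
# are hypothetical placeholders.
import random

def approximate_model(x):   # cheap, coarse evaluation (hypothetical)
    return sum(xi**2 for xi in x)

def intermediate_model(x):  # medium fidelity (hypothetical)
    return sum(xi**2 for xi in x) + 0.1 * sum(abs(xi) for xi in x)

def precise_model(x):       # expensive, accurate evaluation (hypothetical)
    return sum(xi**2 for xi in x) + 0.1 * sum(abs(xi) for xi in x) + 0.01

def evolve(pop, fitness, n_gen=20, sigma=0.1):
    """One EA per hierarchy node: Gaussian mutation + truncation selection."""
    for _ in range(n_gen):
        children = [[xi + random.gauss(0, sigma) for xi in p] for p in pop]
        pop = sorted(pop + children, key=fitness)[:len(pop)]
    return pop

# Layered topology: the cheap layer explores broadly, and the best
# candidates migrate upward to be refined under increasing fidelity.
layers = [approximate_model, intermediate_model, precise_model]
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
for model in layers:
    pop = evolve(pop, model)
print("best precise fitness:", min(precise_model(p) for p in pop))
```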
Abstract:
Raman and infrared spectra of three well-defined turquoise samples, CuAl6(PO4)4(OH)8·4H2O, from the Lavender Pit, Bisbee, Cochise County, Arizona; the Kouroudaiko mine, Faleme River, Senegal; and Lynch Station, Virginia, were studied, interpreted and compared. Observed Raman and infrared bands were assigned to the stretching and bending vibrations of phosphate tetrahedra, water molecules and hydroxyl ions. Approximate O–H⋯O hydrogen bond lengths were inferred from the Raman and infrared spectra. No Raman and infrared bands attributable to the stretching and bending vibrations of (PO3OH)2− units were observed.
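As a rough illustration of how approximate O–H⋯O distances can be read off band positions, the sketch below inverts the empirical Libowitzky (1999) correlation between O–H stretching wavenumber and hydrogen bond length. Whether this particular correlation was used in the paper is an assumption on our part, and the band positions are hypothetical.

```python
# Sketch: estimating d(O-H...O) from an OH-stretching band position,
# assuming the empirical Libowitzky (1999) correlation
#   nu (cm^-1) = 3592 - 304e9 * exp(-d / 0.1321),  d in angstroms.
# This is an illustrative assumption, not the paper's stated procedure.
import math

def oh_o_distance(nu_cm1):
    """Invert the correlation to estimate the O...O distance in angstroms."""
    return -0.1321 * math.log((3592.0 - nu_cm1) / 304e9)

for nu in (3270.0, 3450.0):  # hypothetical OH-stretch band positions
    print(f"{nu:7.1f} cm^-1 -> d(O...O) ~ {oh_o_distance(nu):.2f} A")
```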
Abstract:
This thesis introduces a new way of using prior information in a spatial model and develops scalable algorithms for fitting this model to large imaging datasets. These methods are applied to image-guided radiation therapy and satellite-based classification of land use and water quality. The study uses a pre-computation step to achieve a hundredfold improvement in the elapsed runtime for model fitting. This makes it far more feasible to apply these models to real-world problems, and enables full Bayesian inference for images with a million or more pixels.
Abstract:
We and others have published on the rapid manufacture of micropellet tissues, typically formed from 100-500 cells each. The micropellet geometry enhances cellular biological properties, and in many cases the micropellets can subsequently be used as building blocks to assemble complex macrotissues. Generally, micropellets are formed from cells alone; however, when replicating matrix-rich tissues such as cartilage, it would be ideal if matrix or biomaterial supplements could be incorporated directly into the micropellet during the manufacturing process. Herein we describe a method to efficiently incorporate donor cartilage matrix into tissue-engineered cartilage micropellets. We lyophilized bovine cartilage matrix and shattered it into microscopic pieces with average dimensions of <10 μm in diameter; we termed this microscopic donor matrix "cartilage dust" (CD). Using a microwell platform, we show that ~0.83 μg of CD can be rapidly and efficiently incorporated into single multicellular aggregates formed from 180 bone marrow mesenchymal stem/stromal cells (MSC) each. The microwell platform enabled the rapid manufacture of thousands of replica composite micropellets, each having a material/CD core and a cellular surface. This organization enabled the rapid bulking up of the micropellet core matrix content while leaving an adhesive cellular outer surface, which in turn allowed the ready assembly of the composite micropellets into macroscopic tissues. More generally, this is a versatile method that enables the rapid and uniform integration of biomaterials into multicellular micropellets that can then be used as tissue building blocks. In this study, the addition of CD resulted in an approximately 8-fold volume increase in the micropellets, with the donor matrix contributing to an increase in total cartilage matrix content. Composite micropellets were readily assembled into macroscopic cartilage tissues; the incorporation of CD enhanced tissue size and matrix content, but did not enhance chondrogenic gene expression.
Abstract:
Spoken term detection (STD) is the task of looking up a spoken term in a large volume of speech segments. To provide fast search, the speech segments are first indexed into an intermediate representation using speech recognition engines, which provide multiple hypotheses for each segment. Approximate matching techniques are usually applied at the search stage to compensate for the poor performance of automatic speech recognition engines during indexing. Recently, using visual information in addition to audio information has been shown to improve phone recognition performance, particularly in noisy environments. In this paper, we make use of visual information, in the form of the speaker's lip movements, at the indexing stage and investigate its effect on STD performance. In particular, we investigate whether gains in phone recognition accuracy carry through the approximate matching stage to provide similar gains in the final audio-visual STD system over a traditional audio-only approach. We also investigate the effect of using visual information on STD performance in different noise environments.
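As a concrete illustration of the approximate matching stage, the following sketch scores a query phone sequence against an indexed recognizer hypothesis using edit distance, so recognizer errors (substitutions, insertions, deletions) are tolerated. The phone symbols and the acceptance threshold are illustrative assumptions.

```python
# Minimal sketch of approximate matching for STD search: a query phone
# sequence matches an indexed hypothesis if their edit distance is small.

def edit_distance(query, hypothesis):
    """Standard dynamic-programming Levenshtein distance over phone symbols."""
    m, n = len(query), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if query[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

query = ["s", "p", "ao", "k", "ax", "n"]   # toy phone sequence for a term
hyp   = ["s", "b", "ao", "k", "ax", "n"]   # noisy recognizer output
print("match" if edit_distance(query, hyp) <= 1 else "no match")
```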
Abstract:
Light gauge steel frame (LSF) walls are extensively used in the building industry due to the many advantages they provide over other wall systems. Although LSF walls are used widely, their fire design is based on approximate prescriptive methods derived from limited fire tests. Moreover, these fire tests were conducted using the standard fire curve [1], and the applicability of available design rules to realistic design fire curves has not been verified. This paper investigates the accuracy of existing fire design rules in the current cold-formed steel standards and the modifications proposed by previous researchers. Of these, the design rules recently developed by Gunalan and Mahendran [2], based on Eurocode 3 Part 1.3 [3] and AS/NZS 4600 [4] for standard fire exposure [1], were investigated in detail to determine their applicability to predicting the axial compression strengths and fire resistance ratings of LSF walls exposed to realistic design fire curves. This paper also presents the fire performance results of LSF walls exposed to a range of realistic fire curves, obtained from a parametric study based on finite element analysis. The results of the parametric study were used to develop a simplified design method, based on the critical hot flange temperature, to predict the fire resistance ratings of LSF walls exposed to realistic fire curves. Finally, the stud failure times (fire resistance ratings) obtained from the fire design rules and the simplified design method were compared with the parametric study results for LSF walls lined with single and double plasterboards, and externally insulated with rock fibre, under realistic fire curves.
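A minimal sketch of the critical hot flange temperature idea: given a time-temperature history for the stud hot flange, the fire resistance rating is taken as the time at which an assumed critical temperature is first reached. All numbers below are hypothetical placeholders, not values from the paper.

```python
# Sketch: fire resistance rating as the first crossing of a critical
# hot flange temperature, interpolated from a (hypothetical) simulated
# time-temperature history.
import numpy as np

time_min = np.array([0, 15, 30, 45, 60, 75, 90])          # minutes
hot_flange_C = np.array([20, 180, 350, 480, 570, 640, 700])  # deg C
T_critical = 600.0                                          # assumed limit

# Linear interpolation of the first crossing gives the stud failure time.
frr = np.interp(T_critical, hot_flange_C, time_min)
print(f"fire resistance rating ~ {frr:.0f} min")
```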
Abstract:
In this paper it is demonstrated how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in ABC algorithms to make them more efficient. In this paper, the parametric bootstrap approximation is used to form the initial importance distribution for the sequential Monte Carlo, ABC importance sampling and ABC rejection sampling algorithms. The new approach is illustrated through a simulation study of the univariate g-and-k quantile distribution, and is used to infer parameter values of a stochastic model describing expanding melanoma cell colonies.
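The following toy sketch illustrates the overall recipe on a deliberately simple model (a normal with unknown mean): a pilot ABC rejection run, a Gaussian parametric approximation of its output, and a second ABC pass that uses the approximation as an importance proposal. The model, prior and tolerances are all illustrative assumptions, not the paper's examples.

```python
# Toy sketch: pilot ABC -> parametric approximation -> ABC importance sampling.
import numpy as np
from scipy.stats import norm, uniform

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=50)
s_obs = y_obs.mean()                          # summary statistic

def simulate_summary(theta):
    """Simulate data under the toy model and return its summary."""
    return rng.normal(theta, 1.0, size=50).mean()

# Pilot ABC rejection run from a vague U(-10, 10) prior.
prior_draws = rng.uniform(-10, 10, size=5000)
sims = np.array([simulate_summary(t) for t in prior_draws])
accepted = prior_draws[np.abs(sims - s_obs) < 0.3]

# Parametric (Gaussian) approximation of the pilot posterior.
mu, sd = accepted.mean(), accepted.std()

# Second ABC pass: propose from the approximation with a tighter
# tolerance, reweighting by prior density / proposal density.
proposals = rng.normal(mu, sd, size=2000)
sims2 = np.array([simulate_summary(t) for t in proposals])
keep = np.abs(sims2 - s_obs) < 0.1
w = uniform.pdf(proposals[keep], -10, 20) / norm.pdf(proposals[keep], mu, sd)
print(f"approximate posterior mean ~ {np.average(proposals[keep], weights=w):.2f}")
```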
Abstract:
The inverse temperature hyperparameter of the hidden Potts model governs the strength of spatial cohesion and therefore has a substantial influence over the resulting model fit. The difficulty arises from the dependence of an intractable normalising constant on the value of the inverse temperature; thus there is no closed-form solution for sampling from the distribution directly. We review three computational approaches for addressing this issue: pseudolikelihood, path sampling, and the approximate exchange algorithm. We compare the accuracy and scalability of these methods using a simulation study.
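As an illustration of the first of these approaches, the sketch below computes the log-pseudolikelihood of a Potts labelling, replacing the intractable joint normalising constant with cheap single-site conditionals, and estimates the inverse temperature by a grid search. The 4-neighbour lattice and the grid search are illustrative choices, not the paper's exact setup.

```python
# Sketch: maximum pseudolikelihood estimation of the Potts inverse
# temperature beta on a 4-nearest-neighbour lattice.
import numpy as np

def log_pseudolikelihood(z, beta, q):
    """z: 2-D array of labels in {0..q-1}. Product of full conditionals."""
    ll = 0.0
    rows, cols = z.shape
    for i in range(rows):
        for j in range(cols):
            nbrs = []
            if i > 0: nbrs.append(z[i - 1, j])
            if i < rows - 1: nbrs.append(z[i + 1, j])
            if j > 0: nbrs.append(z[i, j - 1])
            if j < cols - 1: nbrs.append(z[i, j + 1])
            counts = np.bincount(nbrs, minlength=q)   # neighbour matches per label
            # Conditional normaliser sums over only q labels -> tractable.
            ll += beta * counts[z[i, j]] - np.log(np.exp(beta * counts).sum())
    return ll

# Grid-search MPLE for beta on a toy labelling.
rng = np.random.default_rng(0)
z = rng.integers(0, 3, size=(20, 20))
betas = np.linspace(0.0, 2.0, 41)
best = max(betas, key=lambda b: log_pseudolikelihood(z, b, q=3))
print(f"pseudolikelihood estimate of beta ~ {best:.2f}")
```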
Abstract:
Due to the existence of many prestressed members in a structural system, the interdependent behavior of all prestressed members is the main concern in the analysis of the pretension process. A thorough investigation of this mutual effect is essential for an effective, reliable, and optimal analysis. Focusing on this aspect, this paper investigates the interdependent behavior of all prestressed members in the whole structural system based on an influence matrix (IFM). Four different types of IFM are introduced, and two solving methods are presented for analyzing the pretension process: the direct method solves for the exact solution, whereas the iterative method repeatedly applies corrections to reach an approximate solution. A numerical example is then presented. The results show that various complicated batched and repeated tensioning schemes can be analyzed reliably, effectively, and completely based on the IFM.
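A minimal sketch of the two solving strategies, assuming a small invented influence matrix in which entry (i, k) gives the effect on member i of a unit tension applied to member k; the matrix and target forces below are illustrative, not from the paper.

```python
# Sketch: direct vs. iterative pretension analysis via an influence matrix.
import numpy as np

IFM = np.array([[ 1.00, -0.15, -0.05],
                [-0.12,  1.00, -0.10],
                [-0.04, -0.08,  1.00]])   # hypothetical influence matrix
target = np.array([100.0, 120.0, 90.0])   # desired final forces (kN)

# Direct method: solve IFM @ applied = target exactly.
applied = np.linalg.solve(IFM, target)

# Iterative method: repeatedly amend the applied tensions by the
# current force deficit until the residual is negligible.
x = target.copy()
for _ in range(50):
    x += target - IFM @ x

print("direct:   ", np.round(applied, 2))
print("iterative:", np.round(x, 2))       # converges to the same answer
```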
Abstract:
The relatively high incidence of Merkel cell carcinoma (MCC) in Queensland provides a valuable opportunity to examine links with other cancers. A retrospective cohort study was performed using data from the Queensland Cancer Registry. Standardized incidence ratios (SIRs) were used to approximate the relative risk of being diagnosed with another primary cancer either following or prior to MCC. Patients with an eligible first primary MCC (n=787) had more than double the expected number of subsequent primary cancers (SIR=2.19, 95% confidence interval (CI)=1.84–2.60; P<0.001). Conversely, people who were initially diagnosed with cancers other than MCC were about two and a half times more likely to have a subsequent primary MCC (n=244) compared with the general population (SIR=2.69, 95% CI=2.36–3.05; P<0.001). Significantly increased bi-directional relative risks were found for melanoma, lip cancer, head and neck cancer, lung cancer, myelodysplastic diseases, and cancer with unknown primary site. In addition, risks were elevated for female breast cancer and kidney cancer following a first primary MCC, and for subsequent MCCs following first primary colorectal cancer, prostate cancer, non-Hodgkin lymphoma, or lymphoid leukemia. These results suggest that several shared pathways are likely for MCC and other cancers, including immunosuppression, UV radiation, and genetics.
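For reference, an SIR is the ratio of observed to expected case counts, conventionally reported with an exact Poisson confidence interval for the observed count. The sketch below shows the computation on counts chosen to roughly reproduce the first SIR quoted above; they are not the study's actual data.

```python
# Sketch: standardized incidence ratio with an exact Poisson CI.
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """SIR = O/E, with an exact Poisson interval for O via the chi-square link."""
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lo / expected, hi / expected

# Illustrative counts only (chosen so that SIR ~ 2.19, as in the abstract).
sir, lo, hi = sir_with_ci(observed=140, expected=64.0)
print(f"SIR = {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```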
Abstract:
Molecular phylogenetic studies of homologous sequences of nucleotides often assume that the underlying evolutionary process was globally stationary, reversible, and homogeneous (SRH), and that a model of evolution with one or more site-specific and time-reversible rate matrices (e.g., the GTR rate matrix) is enough to accurately model the evolution of data over the whole tree. However, an increasing body of data suggests that evolution under these conditions is an exception, rather than the norm. To address this issue, several non-SRH models of molecular evolution have been proposed, but they either ignore heterogeneity in the substitution process across sites (HAS) or assume it can be modeled accurately using the Γ distribution. As an alternative to these models of evolution, we introduce a family of mixture models that approximate HAS without the assumption of an underlying predefined statistical distribution. This family of mixture models is combined with non-SRH models of evolution that account for heterogeneity in the substitution process across lineages (HAL). We also present two algorithms for searching model space and identifying an optimal model of evolution that is less likely to over- or underparameterize the data. The performance of the two new algorithms was evaluated using alignments of nucleotides with 10 000 sites simulated under complex non-SRH conditions on a 25-tipped tree. The algorithms were found to be very successful, identifying the correct HAL model with a 75% success rate (the average success rate for assigning rate matrices to the tree's 48 edges was 99.25%) and, for the correct HAL model, identifying the correct HAS model with a 98% success rate. Finally, parameter estimates obtained under the correct HAL-HAS model were found to be accurate and precise. The merits of our new algorithms were illustrated with an analysis of 42 337 second codon sites extracted from a concatenation of 106 alignments of orthologous genes encoded by the nuclear genomes of Saccharomyces cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, S. castellii, S. kluyveri, S. bayanus, and Candida albicans. Our results show that second codon sites in the ancestral genome of these species contained 49.1% invariable sites, 39.6% variable sites belonging to one rate category (V1), and 11.3% variable sites belonging to a second rate category (V2). The ancestral nucleotide content was found to differ markedly across these three sets of sites, and the evolutionary processes operating at the variable sites were found to be non-SRH and best modeled by a combination of eight edge-specific rate matrices (four for V1 and four for V2). The number of substitutions per site at the variable sites also differed markedly, with sites belonging to V1 evolving more slowly than those belonging to V2 along the lineages separating the seven species of Saccharomyces. Finally, sites belonging to V1 appeared to have ceased evolving along the lineages separating S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, and S. bayanus, implying that they might have become so selectively constrained that they could be considered invariable sites in these species.
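The core mixture idea can be sketched in a few lines: a site's likelihood is a weighted sum over rate categories, with no parametric distribution imposed on the rates. The category proportions below are taken from the abstract, while the relative rates and the two-state substitution model are illustrative assumptions.

```python
# Toy sketch of among-site rate heterogeneity as a free mixture:
# site likelihood = sum over categories of weight * category likelihood.
import numpy as np

weights = np.array([0.491, 0.396, 0.113])  # invariable, V1, V2 (from abstract)
rates = np.array([0.0, 0.5, 2.5])          # hypothetical relative rates

def p_same(rate, t=0.3):
    """Two-state symmetric model: probability a site is unchanged after time t."""
    return 0.5 + 0.5 * np.exp(-2.0 * rate * t)

lik_unchanged = (weights * p_same(rates)).sum()        # invariable sites never change
lik_changed = (weights * (1.0 - p_same(rates))).sum()
print(f"P(unchanged) = {lik_unchanged:.3f}, P(changed) = {lik_changed:.3f}")
```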
Abstract:
Here, we describe a metal-insulator-insulator nanofocusing structure formed by a high-permittivity dielectric wedge on a metal substrate. The structure is shown to produce nanofocusing of surface plasmon polaritons (SPPs) in the direction opposite to the taper of the wedge, exhibiting a range of nanoplasmonic effects such as nanofocusing of SPPs with negative refraction, formation of plasmonic caustics within a nanoscale distance from the wedge tip, mutual transformation of SPP modes, and significant local field enhancements in the adiabatic and strongly nonadiabatic regimes. A combination of approximate analytical and rigorous numerical approaches is used to analyze the strength and position of caustics in the structure. In particular, it is demonstrated that strong SPP localization within spatial regions as small as a few tens of nanometers near the caustic is achievable in the considered structures. In contrast to other nanofocusing configurations, efficient nanofocusing is shown to occur in the strongly nonadiabatic regime, with taper angles of the dielectric wedge as large as ∼40° and within uniquely short distances (as small as a few tens of nanometers) from the tip of the wedge. Physical interpretations of the obtained results are also presented and discussed.
Abstract:
Deriving an estimate of optimal fishing effort, or even an approximate estimate, is very valuable for managing fisheries with multiple target species. The most challenging task associated with this is allocating effort to individual species when only the total effort is recorded. Spatial information on the distribution of each species within a fishery can be used to justify the allocations, but often such information is not available. To determine the long-term overall effort required to achieve maximum sustainable yield (MSY) and maximum economic yield (MEY), we consider three methods for allocating effort: (i) optimal allocation, which optimally allocates effort among target species; (ii) fixed proportions, which chooses proportions based on past catch data; and (iii) economic allocation, which splits effort based on the expected catch value of each species. Determining the overall fishing effort required to achieve these management objectives is a maximization problem subject to constraints arising from economic and social considerations. We illustrated the approaches using a case study of the Moreton Bay Prawn Trawl Fishery in Queensland (Australia). The results were consistent across the three methods. Importantly, our analysis demonstrated that the optimal total effort was very sensitive to daily fishing costs: the effort ranged from 9500-11 500 boat-days at zero daily cost down to 6000-7000, 4000, and 2500 boat-days at daily cost estimates of $500, $750, and $950, respectively. The zero daily cost corresponds to the MSY, while a daily cost of $750 most closely represents the actual present fishing cost. Given the recent debate on which costs should be factored into the analyses for deriving MEY, our findings highlight the importance of including an appropriate cost function for practical management advice. The approaches developed here could be applied to other multispecies fisheries where only aggregated fishing effort data are recorded, as the literature on this type of modelling is sparse.
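A heavily simplified sketch of the cost-sensitivity point, assuming a single-species Schaefer surplus-production model rather than the paper's multispecies allocation methods. At zero daily cost the profit-maximizing effort coincides with the MSY effort; all parameter values below are invented.

```python
# Sketch: long-run optimal effort under a Schaefer model,
#   Y(E) = q*E*K*(1 - q*E/r),  profit(E) = price*Y(E) - daily_cost*E.
import numpy as np

r, K, q = 0.8, 5000.0, 1e-4       # hypothetical biology/catchability
price = 12.0                       # hypothetical $ per unit catch

E = np.linspace(0.0, 8000.0, 8001)          # candidate effort (boat-days)
equilibrium_yield = q * E * K * (1.0 - q * E / r)
for daily_cost in (0.0, 500.0, 750.0, 950.0):
    profit = price * equilibrium_yield - daily_cost * E
    print(f"cost ${daily_cost:>4.0f}/day -> optimal effort "
          f"{E[np.argmax(profit)]:.0f} boat-days")
```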
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows for specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
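As a usage illustration, the sketch below fits a GEE with an AR(1) working correlation on simulated clustered data using statsmodels. Note that statsmodels' default moment-based estimation of the correlation parameter is used here, not the 'Gaussian estimation' pseudolikelihood procedure recommended above; the simulated data are purely illustrative.

```python
# Sketch: GEE regression with an AR(1) working correlation (statsmodels).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, n_obs = 50, 4
groups = np.repeat(np.arange(n_clusters), n_obs)   # cluster labels
time = np.tile(np.arange(n_obs), n_clusters)       # within-cluster times
x = rng.normal(size=n_clusters * n_obs)
y = 1.0 + 2.0 * x + rng.normal(size=n_clusters * n_obs)

X = sm.add_constant(x)
model = sm.GEE(y, X, groups=groups, time=time,
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Autoregressive())
result = model.fit()
print(result.summary())
```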
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and dealing with it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation based on the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC, and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
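For context, the following sketch shows the LHS baseline for constructing a population of models: stratified parameter samples are pushed through a model and retained when the outputs fall within calibration ranges. The toy model and bounds stand in for the Beeler-Reuter model and experimental biomarker ranges, which are not specified here.

```python
# Sketch: building a population of models via Latin hypercube sampling.
import numpy as np
from scipy.stats import qmc

d = 3                                          # number of model parameters
sampler = qmc.LatinHypercube(d=d, seed=1)
unit = sampler.random(n=1000)                  # stratified samples in [0,1)^d
params = qmc.scale(unit, [0.5] * d, [2.0] * d) # scale to plausible ranges

def toy_model(p):
    """Placeholder for an electrophysiology simulation output (hypothetical)."""
    return p[0] * np.exp(-p[1]) + p[2]

outputs = np.array([toy_model(p) for p in params])
# Keep only parameter sets whose output lies within assumed calibration bounds.
pom = params[(outputs > 1.0) & (outputs < 2.5)]
print(f"{len(pom)} of {len(params)} parameter sets accepted into the POM")
```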