963 results for Objective functions
Abstract:
In this work, a unified algorithm-architecture-circuit co-design environment for complex FPGA system development is presented. The main objective is to find an efficient methodology for designing a configurable, optimized FPGA system with as little effort as possible in the verification stage, so as to shorten the development period. A proposed high-performance FFT/iFFT processor for the Multiband Orthogonal Frequency Division Multiplexing Ultra Wideband (MB-OFDM UWB) system is given as an example to demonstrate the proposed methodology. The design methodology was tested and found to be suitable for almost all types of complex FPGA system design and verification.
Abstract:
Objective: To spatially and temporally characterise the cortical contrast response function to pattern onset stimuli in humans. Methods: Magnetoencephalography (MEG) was used to investigate the human cortical contrast response function to pattern onset stimuli with high temporal and spatial resolution. A beamformer source reconstruction approach was used to spatially localise and identify the time courses of activity at various visual cortical loci. Results: Consistent with the findings of previous studies, MEG beamformer analysis revealed two simultaneous generators of the pattern onset evoked response. These generators arose from anatomically discrete locations in striate and extra-striate visual cortex. Furthermore, these loci demonstrated notably distinct contrast response functions, with the response in striate cortex increasing approximately linearly with contrast, whilst that in extra-striate visual cortex followed a saturating function. Conclusions: The generators that underlie the pattern onset visual evoked response arise from two distinct regions in striate and extra-striate visual cortex. Significance: The spatially, temporally and functionally distinct mechanisms of contrast processing within the visual cortex may account for the disparate results observed across earlier studies and assist in elucidating causal mechanisms of aberrant contrast processing in neurological disorders. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
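The two response profiles reported here can be contrasted with a small numerical sketch. The saturating form is modelled below as a hyperbolic-ratio (Naka-Rushton) function, which is an assumption on our part; the abstract does not name the specific saturating function, and the parameter values (`r_max`, `c50`, `n`) are purely illustrative.

```python
import numpy as np

def linear_crf(c, slope=1.0):
    """Approximately linear contrast response, as reported for striate cortex."""
    return slope * np.asarray(c, dtype=float)

def saturating_crf(c, r_max=1.0, c50=0.2, n=2.0):
    """Saturating contrast response, sketched as a hyperbolic-ratio
    (Naka-Rushton) function; the true extra-striate form is not specified."""
    c = np.asarray(c, dtype=float)
    return r_max * c**n / (c**n + c50**n)

contrasts = np.linspace(0.0, 1.0, 11)
striate = linear_crf(contrasts)            # grows without saturating
extra_striate = saturating_crf(contrasts)  # rises steeply, then levels off
```

Plotting both curves against contrast reproduces the qualitative dissociation described: the saturating response exceeds the linear one at low contrast and flattens toward its ceiling at high contrast.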
Abstract:
The starting point of this research was the belief that manufacturing and similar industries need help with the concept of e-business, especially in assessing the relevance of possible e-business initiatives. The research hypothesis was that it should be possible to produce a systematic model that defines, at a useful level of detail, the probable e-business requirements of an organisation based on objective criteria with an accuracy of 85%-90%. This thesis describes the development and validation of such a model. A preliminary model was developed from a variety of sources, including a survey of current and planned e-business activity and representative examples of e-business material produced by e-business solution providers. The model was subjected to a process of testing and refinement based on recursive case studies, with controls over the improving accuracy and stability of the model. Useful conclusions were also possible as to the relevance of e-business functions to the case study participants themselves. Techniques were evolved to synthesise the e-business requirements of an organisation and present them at a management-summary level of detail. The results of applying these techniques to all the case studies used in this research are discussed. The conclusion of the research was that the case study methodology employed was successful. A model was achieved that is suitable for practical application in a manufacturing organisation requiring help with a requirements definition process.
Abstract:
Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), which is a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments that contain homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and the Akaike Information Criterion (AIC).
The NBRM was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, which was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
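The model-selection step described above — ranking candidate count models by AIC — can be sketched as follows. The log-likelihood values and parameter counts are hypothetical, purely to illustrate the comparison; they are not results from the dissertation.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits for one roadway category: (log-likelihood, n_params).
# PRM = Poisson, NBRM = Negative Binomial, ZIP/ZINB = zero-inflated variants.
fits = {
    "PRM":  (-1520.3, 3),
    "NBRM": (-1455.8, 4),
    "ZIP":  (-1490.1, 6),
    "ZINB": (-1432.4, 7),
}
aics = {name: aic(ll, k) for name, (ll, k) in fits.items()}
best = min(aics, key=aics.get)  # candidate with the lowest AIC
```

In practice the AIC ranking would be combined with the Likelihood Ratio test (for nested pairs such as Poisson vs. Negative Binomial) and the Vuong test (for non-nested pairs such as NBRM vs. ZINB), as the abstract describes, since AIC alone is not a formal hypothesis test.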
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimization strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimized plays a vital role. Where the goal is to base the optimization on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature based model is that the optimized model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimization based on the feature based CAD model, which uses CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to the change in design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement, in terms of capability and robustness, over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real value”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometrical movement along a normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be directly linked with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimization algorithm.
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.
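The finite-difference design-velocity computation and its combination with adjoint surface sensitivities can be sketched as below. This is a simplified sketch, assuming matched discrete point sets for the original and perturbed surfaces with known unit normals and facet areas; the real implementation projects between two independent discretisations, and the function and variable names here are illustrative.

```python
import numpy as np

def design_velocity(x0, x_pert, normals, dp):
    """Parametric design velocity: normal component of the boundary
    movement per unit perturbation dp of one CAD parameter."""
    disp = x_pert - x0  # displacement of each matched surface point
    return np.einsum("ij,ij->i", disp, normals) / dp

def objective_gradient(vn, adjoint_sens, areas):
    """dJ/dp: discrete surface integral of adjoint sensitivity times
    design velocity over the boundary facets."""
    return float(np.sum(adjoint_sens * vn * areas))

# Toy example: a flat patch moved 0.1 along its normal by a perturbation dp = 0.05
x0 = np.zeros((4, 3))
x_pert = x0 + np.array([0.0, 0.0, 0.1])
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
vn = design_velocity(x0, x_pert, normals, dp=0.05)
```

One such design velocity field is computed per CAD parameter, so the gradient with respect to every parameter reuses the same adjoint surface sensitivity field — which is what makes the cost nearly independent of the number of design variables.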
Abstract:
Objective: Caffeine has been shown to affect certain areas of cognition, but for executive functioning the research is limited and inconsistent. One reason could be the need for a more sensitive measure to detect the effects of caffeine on executive function. This study used a new non-immersive virtual reality assessment of executive functions known as JEF© (the Jansari Assessment of Executive Function) alongside the ‘classic’ Stroop Colour-Word task to assess the effects of a normal dose of caffeinated coffee on executive function. Method: Using a double-blind, counterbalanced, within-participants procedure, 43 participants were administered either a caffeinated or decaffeinated coffee and completed the JEF© and Stroop tasks, as well as a subjective mood scale and blood pressure measurements pre- and post-condition, on two separate occasions a week apart. JEF© yields measures for eight separate aspects of executive function, in addition to a total average score. Results: Findings indicate that performance was significantly improved on planning, creative thinking, and event-, time- and action-based prospective memory, as well as on the total JEF© score, following caffeinated coffee relative to decaffeinated coffee. The caffeinated beverage significantly decreased reaction times on the Stroop task, but there was no effect on Stroop interference. Conclusion: The results provide further support for the effects of a caffeinated beverage on cognitive functioning. In particular, the study demonstrated the ability of JEF© to detect the effects of caffeine across a number of executive functioning constructs that were not shown in the Stroop task, suggesting that executive functioning improvements resulting from a ‘typical’ dose of caffeine may only be detected by the use of more real-world, ecologically valid tasks.
Abstract:
Remote sensing is a promising approach for above ground biomass estimation, as forest parameters can be obtained indirectly. Analysis in space and time is quite straightforward due to the flexibility of the method in determining forest crown parameters. It can be used, for example, to evaluate and monitor the development of a forest area over time and the impact of disturbances such as silvicultural practices or deforestation. Vegetation indices, which condense data in a quantitative numeric manner, have been used to estimate several forest parameters, such as volume, basal area and above ground biomass. The objective of this study was the development of allometric functions to estimate above ground biomass using vegetation indices as independent variables. The vegetation indices used were the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Simple Ratio (SR) and Soil-Adjusted Vegetation Index (SAVI). QuickBird satellite data, with 0.70 m spatial resolution, were orthorectified, geometrically and atmospherically corrected, and the digital numbers were converted to top-of-atmosphere (ToA) reflectance. Forest inventory data and published allometric functions at tree level were used to estimate above ground biomass per plot. Linear functions were fitted for the monospecies and multispecies stands of two evergreen oaks (Quercus suber and Quercus rotundifolia) in multiple-use systems, montados. The allometric above ground biomass functions were fitted considering the mean and the median of each vegetation index per grid as the independent variable. Species composition, as a dummy variable, was also considered as an independent variable. The linear functions with the best performance are those with mean NDVI or mean SR as the independent variable. Noteworthy is that the two best functions for monospecies cork oak stands have median NDVI or median SR as the independent variable.
When species composition dummy variables are included in the function (with stepwise regression), the best model has median NDVI as the independent variable. The vegetation indices with the worst model performance were EVI and SAVI.
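A minimal sketch of fitting one such linear allometric function — above ground biomass against mean NDVI per grid — follows. The plot-level values are invented for illustration; they are not the study's inventory data.

```python
import numpy as np

# Hypothetical plot-level data: mean NDVI per grid cell and above ground
# biomass (Mg/ha) derived from inventory plots via tree-level allometry.
mean_ndvi = np.array([0.35, 0.42, 0.48, 0.55, 0.61, 0.68])
agb = np.array([18.0, 24.5, 29.0, 36.2, 41.8, 48.5])

# Linear allometric function AGB = b0 + b1 * NDVI, ordinary least squares.
b1, b0 = np.polyfit(mean_ndvi, agb, 1)

# Coefficient of determination of the fit.
pred = b0 + b1 * mean_ndvi
r2 = 1.0 - np.sum((agb - pred) ** 2) / np.sum((agb - agb.mean()) ** 2)
```

The study's species-composition dummy variable would enter such a model as an extra 0/1 regressor (shifting the intercept between monospecies and multispecies stands), which a stepwise procedure can retain or drop.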
Abstract:
The tissue kallikreins are serine proteases encoded by highly conserved multigene families. The rodent kallikrein (KLK) families are particularly large, consisting of 13–26 genes clustered in one chromosomal locus. It has recently been recognised that the human KLK gene family is of a similar size (15 genes), with the identification of another 12 related genes (KLK4-KLK15) within and adjacent to the original human KLK locus (KLK1-3) on chromosome 19q13.4. The structural organisation and size of these new genes are similar to those of other KLK genes, except for additional exons encoding 5′ or 3′ untranslated regions. Moreover, many of these genes have multiple mRNA transcripts, a trait not observed with rodent genes. Unlike all other kallikreins, the KLK4-KLK15 encoded proteases are less related (25–44%) and do not contain a conventional kallikrein loop. Clusters of genes exhibit high prostatic (KLK2-4, KLK15) or pancreatic (KLK6-13) expression, suggesting evolutionary conservation of elements conferring tissue specificity. These genes are also expressed, to varying degrees, in a wider range of tissues, suggesting a functional involvement of these newer human kallikrein proteases in a diverse range of physiological processes.