948 results for: diffusive viscoelastic model, global weak solution, error estimate
Abstract:
This thesis develops an approach to the construction of multidimensional stochastic models for intelligent systems exploring an underwater environment. It describes methods for building models by a three-dimensional spatial decomposition of stochastic, multisensor feature vectors. New sensor information is incrementally incorporated into the model by stochastic backprojection. Error and ambiguity are explicitly accounted for by blurring a spatial projection of remote sensor data before incorporation. The stochastic models can be used to derive surface maps or other representations of the environment. The methods are demonstrated on data sets from multibeam bathymetric surveying, towed sidescan bathymetry, towed sidescan acoustic imagery, and high-resolution scanning sonar aboard a remotely operated vehicle.
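The incremental, blurred incorporation step described above can be sketched as a Bayesian update on an occupancy-style grid. This is only an illustrative reconstruction: the 1-D grid, Gaussian kernel, and log-odds increment below are assumptions, not the thesis's actual formulation.

```python
import math

def gaussian_blur_kernel(sigma, radius):
    """Discrete Gaussian weights used to blur a projected return before fusion."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def incorporate_return(grid, hit_index, kernel, logodds_hit=2.2):
    """Fuse one blurred sensor return into a 1-D occupancy grid by adding a
    blur-weighted increment to each cell's log-odds (a Bayesian update)."""
    radius = len(kernel) // 2
    peak = max(kernel)
    out = list(grid)
    for offset, w in enumerate(kernel):
        i = hit_index + offset - radius
        if 0 <= i < len(grid):
            prior = out[i]
            logodds = math.log(prior / (1 - prior)) + logodds_hit * (w / peak)
            out[i] = 1 / (1 + math.exp(-logodds))
    return out

grid = [0.5] * 20                       # uninformative prior over 20 cells
kernel = gaussian_blur_kernel(sigma=1.0, radius=2)
for _ in range(5):                      # five repeated returns near cell 10
    grid = incorporate_return(grid, 10, kernel)
```

Repeated returns drive the central cell's occupancy probability toward 1, while the blur spreads weaker evidence into neighbouring cells, mirroring how sensor ambiguity is represented spatially.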
Abstract:
This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, together with prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive statistical measurement model. The approach assumes that the measurements are statistically block-independent conditioned on knowledge of the model, which is a fairly good description of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match the measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because the full measurement history is used to make decisions. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
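The beam Viterbi matching of measurement history against graph paths can be sketched as follows. The contact states, Gaussian measurement models, and transition graph here are hypothetical stand-ins for the paper's task-specific models.

```python
import math

def gauss_loglik(x, mean, std):
    """Log-likelihood of a measurement under a node's Gaussian model."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def beam_viterbi(observations, nodes, arcs, beam_width=3):
    """Beam-pruned Viterbi search over a graph whose nodes store predictive
    measurement models and whose arcs are the allowed state transitions."""
    beams = [(gauss_loglik(observations[0], *nodes[n]), [n]) for n in nodes]
    beams.sort(reverse=True)
    beams = beams[:beam_width]
    for obs in observations[1:]:
        candidates = []
        for score, path in beams:
            for nxt in arcs.get(path[-1], []) + [path[-1]]:  # self-transition allowed
                candidates.append((score + gauss_loglik(obs, *nodes[nxt]), path + [nxt]))
        candidates.sort(reverse=True)
        beams = candidates[:beam_width]
    return beams[0][1]   # most likely path given the whole measurement history

# Hypothetical contact states with (mean, std) of a measured force component.
nodes = {"free": (0.0, 0.5), "contact": (5.0, 0.5), "jammed": (10.0, 0.5)}
arcs = {"free": ["contact"], "contact": ["free", "jammed"], "jammed": ["contact"]}
path = beam_viterbi([0.1, 0.2, 4.8, 5.1, 9.9, 10.2], nodes, arcs)
```

Because whole paths are scored, a single noisy reading cannot flip the estimated state the way a per-sample threshold would.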
Abstract:
The memory hierarchy is the main bottleneck in modern computer systems, as the gap between processor and memory speed continues to grow. The situation in embedded systems is even worse: the memory hierarchy consumes a large share of chip area and energy, both precious resources, and embedded systems must meet multiple design objectives such as performance, energy consumption, and area. Customizing the memory hierarchy for specific applications is therefore an important way to make full use of limited resources and maximize performance. However, traditional custom memory hierarchy design methodologies are phase-ordered: they separate application optimization from memory hierarchy architecture design, which tends to produce locally optimal solutions. In traditional hardware-software co-design methodologies, much of the work has focused on using reconfigurable logic to partition the computation, whereas using reconfigurable logic in the memory hierarchy design itself is seldom addressed. In this paper, we propose a new framework for designing memory hierarchies for embedded systems. The framework takes advantage of flexible reconfigurable logic to customize the memory hierarchy for specific applications, and it combines application optimization and memory hierarchy design to obtain a globally optimal solution. Using the framework, we performed a case study designing a new software-controlled instruction memory that showed promising potential.
Abstract:
The application of discriminant function analysis (DFA) is not a new idea in the study of tephrochronology. In this paper, DFA is applied to compositional datasets of two different types of tephras, from Mount Ruapehu in New Zealand and Mount Rainier in the USA. The canonical variables from the analysis are further investigated with a statistical methodology for change-point problems in order to gain a better understanding of the change in compositional pattern over time. Finally, a special case of segmented regression is proposed to model both the time of change and the change in pattern. This model can be used to estimate the age of unknown tephras using Bayesian statistical calibration.
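A minimal sketch of segmented regression for a single change point, using a grid search over candidate breakpoints on synthetic data. The series below is invented for illustration, and the paper's Bayesian calibration step is not shown.

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def segmented_fit(xs, ys, min_seg=3):
    """Grid search for the single change point minimizing the total SSE of
    two independently fitted linear segments."""
    best_k, best_sse = None, None
    for k in range(min_seg, len(xs) - min_seg + 1):
        sse = linfit(xs[:k], ys[:k])[2] + linfit(xs[k:], ys[k:])[2]
        if best_sse is None or sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic compositional index: flat before the change, jump and rise after.
xs = list(range(20))
ys = [1.0] * 10 + [3.0 + 0.5 * (x - 10) for x in range(10, 20)]
k = segmented_fit(xs, ys)
```

The recovered breakpoint index then plays the role of the estimated "time of change" around which the two compositional regimes are modelled.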
Abstract:
This case study analyses Japanese foreign economic policy in relation to the trade dynamics of both South Korea and Japan itself over the period 2001-2011, with the specific objective of identifying the influence that these trade dynamics, in terms of price and quality competitiveness, have on Japanese foreign trade policy (PECJ).
Abstract:
Early detection of breast cancer (BC) with mammography may cause overdiagnosis and overtreatment, detecting tumors which would remain undiagnosed during a lifetime. The aims of this study were: first, to model invasive BC incidence trends in Catalonia (Spain) taking into account reproductive and screening data; and second, to quantify the extent of BC overdiagnosis. We modeled the incidence of invasive BC using a Poisson regression model. Explanatory variables were age at diagnosis and cohort characteristics (completed fertility rate, percentage of women using mammography at age 50, and year of birth). This model was also used to estimate the background incidence in the absence of screening. We used a probabilistic model to estimate the expected BC incidence if women in the population used mammography as reported in health surveys. The difference between the observed and expected cumulative incidences provided an estimate of overdiagnosis. Incidence of invasive BC increased, especially in cohorts born from 1940 to 1955. The biggest increase was observed in these cohorts between the ages of 50 and 65 years, where the final BC incidence rates more than doubled the initial ones. Dissemination of mammography was significantly associated with BC incidence and overdiagnosis. Our estimates of overdiagnosis ranged from 0.4% to 46.6% for women born around 1935 and 1950, respectively. Our results support the existence of overdiagnosis in Catalonia attributable to mammography usage, and the limited malignant potential of some tumors may play an important role. Women should be better informed about this risk. Research should be oriented towards personalized screening and risk assessment tools.
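The overdiagnosis estimate described above reduces to a relative excess of observed over expected cumulative incidence. A sketch with invented numbers (these are not the study's data):

```python
def overdiagnosis_pct(observed_cum, expected_cum):
    """Overdiagnosis estimated as the relative excess of the observed
    cumulative incidence over the incidence expected under reported
    mammography use: 100 * (observed - expected) / expected."""
    return 100.0 * (observed_cum - expected_cum) / expected_cum

# Hypothetical cumulative invasive-BC incidences per 1,000 women in a cohort.
excess_pct = overdiagnosis_pct(observed_cum=62.0, expected_cum=50.0)
```

The modelling effort in the study goes into producing the two cumulative incidences; the final overdiagnosis figure is this simple ratio applied cohort by cohort.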
Abstract:
In this paper we consider the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data, a problem which models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve good approximation at high frequencies with a relatively low number of degrees of freedom, we propose a novel Galerkin boundary element method, using a graded mesh with smaller elements adjacent to discontinuities in impedance and a special set of basis functions so that, on each element, the approximation space contains polynomials (of degree at most ν) multiplied by traces of plane waves on the boundary. We prove stability and convergence and show that the error in computing the total acoustic field is O(N^-(ν+1) log^(1/2) N), where the number of degrees of freedom is proportional to N log N. This error estimate is independent of the wavenumber, and thus the number of degrees of freedom required to achieve a prescribed level of accuracy does not increase as the wavenumber tends to infinity.
Abstract:
A wind catcher/tower natural ventilation system was installed in a seminar room in the building of the School of Construction Management and Engineering at the University of Reading in the UK. Performance was analysed by means of tracer gas ventilation measurements, indoor climate measurements (temperature, humidity, CO2) and occupant surveys. In addition, the potential of simple design tools was evaluated by comparing observed ventilation results with those predicted by an explicit ventilation model and the AIDA implicit ventilation model. To support this analysis, external climate parameters (wind speed and direction, solar radiation, external temperature and humidity) were also monitored. The results showed that the chosen ventilation design provided a substantially greater ventilation rate than an equivalent area of openable window. Air quality parameters also stayed within accepted norms, and occupants expressed general satisfaction with the system and with comfort conditions. Night cooling was maximised by using the system in combination with openable windows. Comparison of calculations with ventilation rate measurements showed that while AIDA gave results reasonably well correlated with the monitored performance, the widely used explicit industry model was found to overestimate the monitored ventilation rate.
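A simple explicit ventilation calculation of the kind such design tools perform can be sketched with the textbook orifice and stack equations. The discharge coefficient, opening area, and temperatures below are illustrative assumptions; this is not the AIDA formulation itself.

```python
import math

def wind_driven_flow(c_d, area_m2, wind_speed, delta_cp):
    """Wind-driven volume flow through an opening (orifice form):
    Q = Cd * A * U * sqrt(dCp), dCp = pressure-coefficient difference."""
    return c_d * area_m2 * wind_speed * math.sqrt(delta_cp)

def stack_driven_flow(c_d, area_m2, height_m, t_in_k, t_out_k, g=9.81):
    """Buoyancy (stack) driven flow: Q = Cd * A * sqrt(2*g*h*|Ti-To|/Tavg)."""
    t_avg = 0.5 * (t_in_k + t_out_k)
    return c_d * area_m2 * math.sqrt(2.0 * g * height_m * abs(t_in_k - t_out_k) / t_avg)

# Combine the two contributions in quadrature, a common explicit-model choice.
q_wind = wind_driven_flow(c_d=0.61, area_m2=0.25, wind_speed=4.0, delta_cp=0.3)
q_stack = stack_driven_flow(c_d=0.61, area_m2=0.25, height_m=3.0,
                            t_in_k=295.0, t_out_k=285.0)
q_total = math.sqrt(q_wind**2 + q_stack**2)
```

Implicit models such as AIDA instead iterate a pressure balance over all openings, which is one reason their predictions can differ from an explicit formula like this.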
Abstract:
A neural-network-enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro-PID controller is structured around plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel, which captures plant dynamics around operating points, plus an error agent that accommodates the errors induced by the linear submodel's inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to model the error agent, with weights updated on the basis of the error between the plant output and the output of the linear submodel. Because the controller design procedure is based on the equivalent model, the error agent is naturally incorporated into the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum-phase behaviour. Two simulation studies demonstrate the effectiveness of the controller design procedure.
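The recursive least-squares identification of the linear submodel can be sketched as follows. The first-order plant, forgetting factor, and excitation sequence are assumptions for illustration; the neural-network error agent, which would be trained on the residual, is omitted.

```python
def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam."""
    Pphi = [sum(P[i][j] * phi[j] for j in range(2)) for i in range(2)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(2))
    K = [v / denom for v in Pphi]
    err = y - sum(phi[i] * theta[i] for i in range(2))   # a priori prediction error
    theta = [theta[i] + K[i] * err for i in range(2)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P, err

# Identify y_t = a*y_{t-1} + b*u_{t-1}; the (hypothetical) true plant has
# a = 0.8, b = 0.5.  On a nonlinear plant, the residual err would be the
# training signal for the error agent network.
theta = [0.0, 0.0]
P = [[100.0, 0.0], [0.0, 100.0]]
y_prev = 0.0
residuals = []
for u in [1.0, -1.0, 0.5, 1.0, -0.5] * 20:
    y = 0.8 * y_prev + 0.5 * u        # true plant response (linear here)
    theta, P, e = rls_update(theta, P, [y_prev, u], y)
    residuals.append(e)
    y_prev = y
```

On this noiseless linear plant the estimates converge to the true parameters and the residual vanishes, so the error agent would have nothing left to model.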
Abstract:
Three novel mixed-bridged trinuclear and one tetranuclear copper(II) complexes of tridentate NNO donor Schiff base ligands, [Cu3(L1)2(μ1,1-N3)2(CH3OH)2(BF4)2] (1), [Cu3(L1)2(μ-NO3-1κO:2κO')2] (2), [Cu3(L2)2(μ1,1-N3)2(μ-NO3-1κO:2κO')2] (3) and [Cu4(L3)2(μ1,1-N3)4(μ-CH3COO-1κO:2κO')2] (4), have been synthesized by reaction of the respective tridentate ligands (L1 = 2-[1-(2-dimethylamino-ethylimino)-ethyl]-phenol, L2 = 2-[1-(2-diethylamino-ethylimino)-ethyl]-phenol, L3 = 2-[1-(2-dimethylamino-ethylimino)-methyl]-phenol) with the corresponding copper(II) salts in the presence of NaN3. The complexes are characterized by single-crystal X-ray diffraction analyses and variable-temperature magnetic measurements. Complex 1 is composed of two terminal [Cu(L1)(μ1,1-N3)] units connected by a central [Cu(BF4)2] unit through nitrogen atoms of end-on azido ligands and a phenoxo oxygen atom of the tridentate ligand. The structures of 2 and 3 are very similar; the only difference is that the central unit is [Cu(NO3)2] and the nitrate group forms an additional μ-NO3-1κO:2κO' bridge between the terminal and central copper atoms. In complex 4, the central unit is a di-μ1,1-N3 bridged dicopper entity, [Cu2(μ1,1-N3)2(CH3COO)2], that connects two terminal [Cu(L3)(μ1,1-N3)] units through end-on azido, phenoxo oxygen and μ-CH3COO-1κO:2κO' triple bridges to give a tetranuclear unit. Analysis of variable-temperature magnetic susceptibility data indicates a global weak antiferromagnetic interaction between the copper(II) ions in complexes 1-3, with exchange parameters J of -9.86, -11.6 and -19.98 cm-1 for 1-3, respectively. In complex 4, theoretical calculations show an antiferromagnetic coupling through the triple bridging ligands (acetato, phenoxo and azido), while the interaction through the double end-on azido bridge is strongly ferromagnetic.
Abstract:
Four new nickel(II) complexes, [Ni2L2(NO2)2]·CH2Cl2·C2H5OH·2H2O (1), [Ni2L2(DMF)2(μ-NO2)]ClO4·DMF (2a), [Ni2L2(DMF)2(μ-NO2)]ClO4 (2b) and [Ni3L′2(μ3-NO2)2(CH2Cl2)]n·1.5H2O (3), where HL = 2-[(3-amino-propylimino)-methyl]-phenol, H2L′ = 2-({3-[(2-hydroxy-benzylidene)-amino]-propylimino}-methyl)-phenol and DMF = N,N-dimethylformamide, have been synthesized starting from the precursor complex [NiL2]·2H2O, nickel(II) perchlorate and sodium nitrite, and characterized structurally and magnetically. The structural analyses reveal that in all the complexes the Ni(II) ions possess a distorted octahedral geometry. Complex 1 is a dinuclear di-μ2-phenoxo bridged species in which the nitrite ion acts as a chelating co-ligand. Complexes 2a and 2b also consist of dinuclear entities, but in these two compounds a cis-(μ-nitrito-1κO:2κN) bridge is present in addition to the di-μ2-phenoxo bridge. The molecular structures of 2a and 2b are equivalent; they differ only in that 2a contains an additional solvated DMF molecule. Complex 3 is formed by ligand rearrangement and is a one-dimensional polymer in which double phenoxo as well as μ-nitrito-1κO:2κN bridged trinuclear units are linked through a very rare μ3-nitrito-1κO:2κN:3κO′ bridge. Analysis of variable-temperature magnetic susceptibility data indicates a global weak antiferromagnetic interaction between the nickel(II) ions in all four complexes, with exchange parameters J of -5.26, -11.45, -10.66 and -5.99 cm-1 for 1, 2a, 2b and 3, respectively.
Abstract:
Motivated by accounts of concept use in autistic spectrum disorder (ASD) and a computational model of weak central coherence (O'Loughlin & Thagard, 2000), we examined comprehension and production vocabulary in typically developing children and in children with ASD and Down syndrome (DS). The Colorado Meaningfulness of a word is a measure of how many words can be associated with it. Controlling for frequency, familiarity, length, and imageability, Colorado Meaningfulness played a hitherto unremarked role in the vocabularies of children with ASD: words high in Colorado Meaningfulness were underrepresented in the comprehension vocabularies of 2- to 12-year-olds with ASD. Situations in which highly Colorado Meaningful words are encountered are typically highly variable, and such words often involve extensive use of context. Our data suggest that the number of contexts in which a particular word can appear has a role in determining vocabulary in ASD, a suggestion consistent with the weak central coherence theory of autism.
Abstract:
In public goods experiments, stochastic choice, censoring and motivational heterogeneity give scope for disagreement over the extent of unselfishness, and whether it is reciprocal or altruistic. We show that these problems can be addressed econometrically, by estimating a finite mixture model to isolate types, incorporating double censoring and a tremble term. Most subjects act selfishly, but a substantial proportion are reciprocal with altruism playing only a marginal role. Isolating reciprocators enables a test of Sugden’s model of voluntary contributions. We estimate that reciprocators display a self-serving bias relative to the model.
Abstract:
To optimise the placement of small wind turbines in urban areas, a detailed understanding of the spatial variability of the wind resource is required. At present, owing to a lack of observations, the NOABL wind speed database is frequently used to estimate the wind resource at a potential site. However, recent work has shown that this tends to overestimate the wind speed in urban areas. This paper suggests a method for adjusting the predictions of the NOABL in urban areas by considering the impact of the underlying surface on a neighbourhood scale, in which the nature of the surface is characterised at 1 km2 resolution using an urban morphology database. The model was then used to estimate the variability of the annual mean wind speed across Greater London at a height typical of current small wind turbine installations. Initial validation suggests that the predicted wind speeds are considerably more accurate than the NOABL values. The derived wind map therefore currently provides the best opportunity to identify the neighbourhoods in Greater London at which small wind turbines would yield their highest energy production. The model does not consider street-scale processes; however, previously derived scaling factors can be applied to relate the neighbourhood wind speed to a value at a specific rooftop site. The results showed that the wind speed predicted across London is relatively low, exceeding 4 m s-1 at only 27% of the neighbourhoods in the city. Of these sites, less than 10% are within 10 km of the city centre, with the majority over 20 km from the centre. Consequently, small wind turbines are predicted to perform better towards the outskirts of the city; for cities which fit the Burgess concentric ring model, such as Greater London, 'distance from city centre' is therefore a useful parameter for siting small wind turbines. However, there are a number of neighbourhoods close to the city centre at which the wind speed is relatively high, and these sites can only be identified with a detailed representation of the urban surface, such as that developed in this study.
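The neighbourhood-scale effect of surface roughness on wind speed can be illustrated with the standard logarithmic wind profile. The friction velocity, roughness lengths, and displacement height below are assumed values for illustration, not the paper's model.

```python
import math

def log_law_speed(u_star, z, z0, d=0.0, k=0.4):
    """Mean wind speed from the logarithmic profile:
    u(z) = (u*/k) * ln((z - d) / z0), with roughness length z0 and
    displacement height d characterising the underlying surface."""
    return (u_star / k) * math.log((z - d) / z0)

# Same (assumed) friction velocity over two surfaces: the rougher urban
# neighbourhood yields a markedly lower speed at a 15 m rooftop height.
u_open = log_law_speed(u_star=0.3, z=15.0, z0=0.03)           # open country
u_urban = log_law_speed(u_star=0.3, z=15.0, z0=1.0, d=5.0)    # urban surface
```

This is the basic mechanism by which a morphology-derived roughness map corrects an open-country database such as NOABL downward over built-up areas.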
Abstract:
This paper examines the lead–lag relationship between the FTSE 100 index and index futures price employing a number of time series models. Using 10-min observations from June 1996–1997, it is found that lagged changes in the futures price can help to predict changes in the spot price. The best forecasting model is of the error correction type, allowing for the theoretical difference between spot and futures prices according to the cost of carry relationship. This predictive ability is in turn utilised to derive a trading strategy which is tested under real-world conditions to search for systematic profitable trading opportunities. It is revealed that although the model forecasts produce significantly higher returns than a passive benchmark, the model was unable to outperform the benchmark after allowing for transaction costs.
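The cost-of-carry relationship and the error-correction forecast it motivates can be sketched as follows. All coefficients and prices are illustrative assumptions, not the paper's estimates.

```python
import math

def fair_futures_price(spot, r, q, tau):
    """Cost-of-carry fair value of an index future:
    F* = S * exp((r - q) * tau), with risk-free rate r and dividend yield q."""
    return spot * math.exp((r - q) * tau)

def ecm_forecast(delta_f_lag, mispricing_lag, gamma=0.2, beta=0.3):
    """One-step error-correction forecast of the spot change:
    dS_t = gamma * (F - F*)_{t-1} + beta * dF_{t-1}; a future trading above
    its cost-of-carry fair value pulls the spot upward (illustrative form)."""
    return gamma * mispricing_lag + beta * delta_f_lag

f_star = fair_futures_price(spot=4000.0, r=0.05, q=0.03, tau=0.25)
mispricing = 4030.0 - f_star        # observed futures price minus fair value
ds_hat = ecm_forecast(delta_f_lag=2.0, mispricing_lag=mispricing)
```

The error-correction term is what lets the forecasting model exploit temporary deviations from the cost-of-carry relationship, even if, as the paper finds, transaction costs may absorb the resulting profits.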