130 results for 3D problems
Abstract:
This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise ratio (SNR) asymptotic performance of the maximum likelihood estimator—derived assuming Gaussian transmitted symbols—is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature-amplitude modulation (QAM) or amplitude-phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant modulus constellation such as phase-shift keying (PSK) or continuous-phase modulations (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally-modulated signals.
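For context, a "second-order" estimator is one that relies only on the sample covariance of the received signal. Under the Gaussian assumption, the large-sample ML estimate reduces to the classical covariance-matching criterion below; this is a standard result quoted for orientation, and the paper's own formulation may differ:

    \hat{\theta} = \arg\min_{\theta} \; \ln\det \mathbf{R}(\theta) + \mathrm{tr}\!\left\{ \mathbf{R}^{-1}(\theta)\, \hat{\mathbf{R}} \right\}

where \hat{\mathbf{R}} is the sample covariance of the observations and \mathbf{R}(\theta) is the model covariance parameterized by, for instance, the DOAs.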
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q=1/2 case. We show that, when the residual principle is taken as a constraint, the q=1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution devised in this way contains a component corresponding to the well-known regularized solution of Tikhonov (1977).
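For reference, the Tikhonov solution mentioned above is the textbook minimizer (standard material, independent of this paper):

    x_{\lambda} = \arg\min_x \; \|Ax - b\|^2 + \lambda \|x\|^2 = (A^{T}A + \lambda I)^{-1} A^{T} b,

and the residual (discrepancy) principle used as the constraint fixes \lambda so that the residual norm \|A x_{\lambda} - b\| matches the noise level \delta.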
Abstract:
The widespread implementation of GIS-based 3D topographical models has been a great aid in the development and testing of archaeological hypotheses. In this paper, a topographical reconstruction of the ancient city of Tarraco, the Roman capital of the Tarraconensis province, is presented. This model is based on topographical data obtained through archaeological excavations, old photographic documentation, georeferenced archive maps depicting the pre-modern city topography, modern detailed topographical maps and differential GPS measurements. The addition of the Roman urban architectural features to the model makes it possible to test hypotheses concerning the ideological background manifested in the city shape. This is accomplished mainly through the use of 3D views from the main city accesses. These techniques ultimately demonstrate the ‘theatre-shaped’ layout of the city (to quote Vitruvius) as well as its southwest-oriented architecture, whose monumental character was conceived to present a striking aspect to visitors, particularly those arriving from the sea.
Abstract:
Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are limited in their ability to capture coal splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate the facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to those of conventional thickness map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal proportion threshold used to determine whether a block can be recovered, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserve predictions differ between the simulation methods, even with dense conditioning datasets. Simulation methods can be ranked according to the correlation of their outputs with predictions from the directly interpolated coal proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
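As an illustration of the recoverability threshold mentioned in point a), a minimal sketch on a synthetic facies model; the grid dimensions, block size, cell volume and 0.5 cut-off are all assumed values, not the paper's:

    import numpy as np

    # Synthetic facies model: 1 = coal, 0 = terrigenous, on a 3D cell grid.
    rng = np.random.default_rng(0)
    facies = (rng.random((60, 60, 20)) < 0.4).astype(int)

    # Aggregate cells into mining blocks and compute coal proportion per block.
    bx, by, bz = 10, 10, 5                      # cells per block (assumed)
    blocks = facies.reshape(6, bx, 6, by, 4, bz).mean(axis=(1, 3, 5))

    # A block counts toward reserves only if its coal proportion exceeds
    # the recoverability threshold (the key sensitivity parameter above).
    threshold = 0.5                             # assumed cut-off
    cell_volume = 25.0 * 25.0 * 1.0             # m^3 per cell (assumed)
    block_cells = bx * by * bz
    coal_m3 = (blocks[blocks >= threshold] * block_cells * cell_volume).sum()
    print(f"Estimated recoverable coal volume: {coal_m3:.0f} m^3")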
Abstract:
The goal of this project is the integration of a set of technologies (graphics, physical simulation, input), with the aim of assembling an application framework in Python. In this research, a set of key introductory concepts is presented, following an in-depth study of the state of the art of 3D applications. Python is selected and justified as the programming language on account of the features and advantages it offers over other languages. Finally, the design and implementation of the framework are presented in the last chapter, together with some client application examples.
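A minimal sketch of the kind of framework loop described; all class and method names here are illustrative assumptions, not the project's actual API:

    import time

    class Physics:
        """Toy simulation subsystem: only advances an internal clock."""
        def __init__(self):
            self.t = 0.0
        def step(self, dt):
            self.t += dt
        def state(self):
            return {"t": self.t}

    class Graphics:
        """Stand-in renderer: prints instead of drawing."""
        def render(self, state):
            print(f"frame at t={state['t']:.2f} s")

    class Input:
        """Stand-in input subsystem: never reports events."""
        def poll(self):
            return []

    class Framework:
        """Couples the three subsystems in a fixed-timestep main loop."""
        def __init__(self, graphics, physics, input_handler):
            self.graphics, self.physics, self.input = graphics, physics, input_handler
        def run(self, frames=3, dt=1.0 / 60.0):
            for _ in range(frames):             # a real loop would run until quit
                if "quit" in self.input.poll():
                    break
                self.physics.step(dt)
                self.graphics.render(self.physics.state())
                time.sleep(dt)

    Framework(Graphics(), Physics(), Input()).run()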
Abstract:
The geometric characterisation of tree orchards is a high-precision activity comprising the accurate measurement and knowledge of the geometry and structure of the trees. Different types of sensors can be used to perform this characterisation. In this work a terrestrial LIDAR sensor (SICK LMS200) whose emission source was a 905-nm pulsed laser diode was used. Given the known dimensions of the laser beam cross-section (with diameters ranging from 12 mm at the point of emission to 47.2 mm at a distance of 8 m), and the known dimensions of the elements that make up the crops under study (flowers, leaves, fruits, branches, trunks), it was anticipated that, for much of the time, the laser beam would only partially hit a foreground target/object, with the consequent problem of mixed pixels or edge effects. Understanding what happens in such situations was the principal objective of this work. With this in mind, a series of tests were set up to determine the geometry of the emitted beam and to determine the response of the sensor to different beam blockage scenarios. The main conclusions that were drawn from the results obtained were: (i) in a partial beam blockage scenario, the distance value given by the sensor depends more on the blocked radiant power than on the blocked surface area; (ii) there is an area that influences the measurements obtained that is dependent on the percentage of blockage and which ranges from 1.5 to 2.5 m with respect to the foreground target/object. If the laser beam impacts on a second target/object located within this range, this will affect the measurement given by the sensor. To interpret the information obtained from the point clouds provided by the LIDAR sensors, such as the volume occupied and the enclosing area, it is necessary to know the resolution and the process for obtaining this mesh of points and also to be aware of the problem associated with mixed pixels.
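Using the figures quoted above (12 mm diameter at emission, 47.2 mm at 8 m), the beam cross-section at intermediate ranges can be estimated by assuming linear divergence; a minimal sketch, with the linear model being an assumption made for illustration:

    def beam_diameter_mm(range_m, d_emit=12.0, d_8m=47.2, ref_m=8.0):
        """Beam cross-section diameter (mm) at range_m metres, linearly
        interpolated between the SICK LMS200 figures quoted above."""
        return d_emit + (d_8m - d_emit) * range_m / ref_m

    print(beam_diameter_mm(4.0))    # ~29.6 mm at 4 m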
Abstract:
In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.
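A minimal sketch of how the stated linear hypothesis can be checked; the impact counts and leaf areas below are placeholder values, not data from the study:

    import numpy as np

    # Placeholder measurements: LIDAR impact counts vs. measured leaf area (m^2).
    impacts = np.array([1200, 2300, 3100, 4000, 5200])
    leaf_area = np.array([1.1, 2.0, 2.9, 3.8, 4.9])

    # Least-squares fit of leaf_area = a * impacts + b and its correlation.
    a, b = np.polyfit(impacts, leaf_area, 1)
    r = np.corrcoef(impacts, leaf_area)[0, 1]
    print(f"slope={a:.2e} m^2/impact, intercept={b:.2f} m^2, r={r:.3f}")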
Abstract:
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
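A minimal sketch of a three-dimensional RGB look-up table classifier of the general kind described; the "red peach" rule used to populate the LUT is a placeholder, not the paper's calibrated color model:

    import numpy as np

    # Quantize each RGB axis into 32 bins -> a 32x32x32 boolean LUT.
    BINS = 32
    lut = np.zeros((BINS, BINS, BINS), dtype=bool)

    # Placeholder rule to populate the LUT: flag strongly red bins.
    r, g, b = np.meshgrid(*(np.arange(BINS),) * 3, indexing="ij")
    lut[(r > 2 * g) & (r > 2 * b)] = True

    def detect(image):
        """Classify an HxWx3 uint8 image; returns a boolean fruit mask."""
        idx = image // (256 // BINS)                        # map 0..255 -> 0..31
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]   # one lookup per pixel

    frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
    print(detect(frame).mean())     # fraction of pixels flagged as fruit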
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years they have gone from a pastime to an interesting research area, one that is interesting in two respects. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used in experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns, with m = n, is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the degree of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
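As an illustration of the CSP view of the problem, a minimal backtracking solver for a Generalized Sudoku with m-row by n-column block regions; this is an illustrative sketch, not the authors' generators or solvers:

    from itertools import product

    def solve_gsp(grid, m, n):
        """Complete an (m*n) x (m*n) grid with m x n block regions; 0 marks a
        hole. Returns True iff the partial grid can be completed."""
        N = m * n
        for r, c in product(range(N), repeat=2):
            if grid[r][c]:
                continue                        # cell already filled
            for v in range(1, N + 1):
                row_ok = all(grid[r][j] != v for j in range(N))
                col_ok = all(grid[i][c] != v for i in range(N))
                br, bc = (r // m) * m, (c // n) * n
                blk_ok = all(grid[br + i][bc + j] != v
                             for i in range(m) for j in range(n))
                if row_ok and col_ok and blk_ok:
                    grid[r][c] = v
                    if solve_gsp(grid, m, n):
                        return True
                    grid[r][c] = 0              # undo and try the next value
            return False                        # no value fits this hole
        return True                             # no holes left

    print(solve_gsp([[0] * 6 for _ in range(6)], 2, 3))   # 6x6 GSP, 2x3 blocks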
Abstract:
A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in the context of monopoly theory, also extending the generalization to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, achieved under assumptions of continuity and piecewise differentiability in connection with the endogenous variables of the problem, still remain valid after replacing such assumptions by two minimal requirements.
Abstract:
N = 1 designs imply repeated recordings of the behaviour of the same experimental unit, and the measurements obtained are often few due to time limitations, while they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests represent one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the evidence available, there does not seem to be an optimal technique for single-case data analysis.
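A minimal sketch of a randomization test for a single-case AB design on simulated data; permuting the intervention point, as done here, is one common variant and not necessarily the exact procedure evaluated in the study:

    import numpy as np

    rng = np.random.default_rng(1)
    series = np.r_[rng.normal(0, 1, 15), rng.normal(1, 1, 15)]  # A phase, then B

    def ab_stat(y, k):
        return y[k:].mean() - y[:k].mean()      # B-minus-A mean difference

    # Observed statistic at the actual intervention point, then the null
    # distribution over all admissible intervention points.
    obs = ab_stat(series, 15)
    null = np.array([ab_stat(series, k) for k in range(5, len(series) - 4)])
    p = (null >= obs).mean()
    print(f"observed diff={obs:.2f}, randomization p={p:.3f}")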
Abstract:
Background: A holistic perspective on health implies giving careful consideration to the relationship between physical and mental health. In this regard the present study sought to determine the level of Positive Mental Health (PMH) among people with chronic physical health problems, and to examine the relationship between the observed levels of PMH and both physical health status and socio-demographic variables. Methods: The study was based on the Multifactor Model of Positive Mental Health (Lluch, 1999), which comprises six factors: Personal Satisfaction (F1), Prosocial Attitude (F2), Self-control (F3), Autonomy (F4), Problem-solving and Self-actualization (F5), and Interpersonal Relationship Skills (F6). The sample comprised 259 adults with chronic physical health problems who were recruited through a primary care center in the province of Barcelona (Spain). Positive mental health was assessed by means of the Positive Mental Health Questionnaire (Lluch, 1999). Results: Levels of PMH differed, either on the global scale or on specific factors, in relation to the following variables: a) age: global PMH scores decreased with age (r=-0.129; p=0.038); b) gender: men scored higher on F1 (t=2.203; p=0.028) and F4 (t=3.182; p=0.002), while women scored higher on F2 (t=-3.086; p=0.002) and F6 (t=-2.744; p=0.007); c) number of health conditions: the fewer the number of health problems, the higher the PMH score on F5 (r=-0.146; p=0.019); d) daily medication: polymedicated patients had lower PMH scores, both globally and on various factors; e) use of analgesics: occasional use of painkillers was associated with higher PMH scores on F1 (t=-2.811; p=0.006). There were no significant differences in global PMH scores according to the type of chronic health condition. The only significant difference in the analysis by factors was that patients with hypertension obtained lower PMH scores on the factor Autonomy (t=2.165; p=0.032). Conclusions: Most people with chronic physical health problems have medium or high levels of PMH. The variables that adversely affect PMH are old age, polymedication and frequent consumption of analgesics. The type of health problem does not influence the levels of PMH. Much more extensive studies with samples without chronic pathology are now required in order to draw more robust conclusions.
Abstract:
Little is known about how genetic and environmental factors contribute to the association between parental negativity and behavior problems from early childhood to adolescence. The current study fitted a cross-lagged model in a sample consisting of 4,075 twin pairs to explore (a) the role of genetic and environmental factors in the relationship between parental negativity and behavior problems from age 4 to age 12, (b) whether parent-driven and child-driven processes independently explain the association, and (c) whether there are sex differences in this relationship. Both phenotypes showed substantial genetic influence at both ages. The concurrent overlap between them was mainly accounted for by genetic factors. Causal pathways representing stability of the phenotypes and parent-driven and child-driven effects significantly and independently account for the association. Significant but slight differences were found between males and females for parent-driven effects. These results were highly similar when general cognitive ability was added as a covariate. In summary, the longitudinal association between parental negativity and behavior problems seems to be bidirectional and mainly accounted for by genetic factors. Furthermore, child-driven effects were mainly genetically mediated, and parent-driven effects were a function of both genetic and shared-environmental factors.
Abstract:
Reinsurance is one of the tools that an insurer can use to mitigate underwriting risk and thereby control its solvency. In this paper, we focus on proportional reinsurance arrangements and examine several optimization and decision problems faced by the insurer with respect to the reinsurance strategy. To this end, we use as decision tools not only the probability of ruin but also the random variable deficit at ruin, given that ruin occurs. The discounted penalty function (Gerber & Shiu, 1998) is employed to obtain, as particular cases, the probability of ruin and the moments and distribution function of the deficit at ruin.
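For reference, the discounted penalty function of Gerber & Shiu (1998) cited above is the standard object (textbook definition):

    \phi(u) = E\left[ e^{-\delta T}\, w\big(U(T^-), |U(T)|\big)\, \mathbf{1}_{\{T < \infty\}} \,\middle|\, U(0) = u \right],

where T is the ruin time, U(T^-) the surplus immediately before ruin, |U(T)| the deficit at ruin, \delta \ge 0 the discount rate and w a penalty function; taking w \equiv 1 and \delta = 0 recovers the probability of ruin.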
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
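For reference, the Feller (square-root) process described above is commonly written in the form (one standard parameterization, not necessarily the paper's notation):

    dX_t = (a - b X_t)\, dt + \sigma \sqrt{X_t}\, dW_t, \qquad a, b, \sigma > 0,

so the drift is linear and the diffusion coefficient \sigma^2 X_t vanishes at the origin; the first-passage problem then asks for the distribution of T_c = \inf\{ t \ge 0 : X_t = c \} for a given level c.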