Abstract:
Autonomous underwater gliders are robust, widely used ocean sampling platforms characterized by their endurance, and are one of the best approaches to gathering subsurface data at the spatial resolution needed to advance our knowledge of the ocean environment. Gliders generally do not employ sophisticated sensors for underwater localization, but instead dead-reckon between set waypoints; as a result, these vehicles are subject to large positional errors between prescribed and actual surfacing locations. Here, we investigate incorporating a large-scale, regional ocean model into the trajectory design for autonomous gliders to improve their navigational accuracy. We compute the dead-reckoning error for our Slocum gliders and compare it to the average positional error recorded from multiple deployments conducted over the past year. We then compare trajectory plans computed on board the vehicle during recent deployments to our prediction-based trajectory plans for 140 surfacing occurrences.
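As a rough, hypothetical sketch of how a dead-reckoning error of the kind discussed above can be quantified (the function names, coordinates, and speeds here are illustrative assumptions, not the authors' on-board implementation), the snippet below propagates a position from heading and speed through the water and measures the offset from the GPS fix obtained at the surface. Because the propagation ignores ocean currents, the reported offset is exactly the kind of surfacing error that a predictive ocean model is intended to reduce.

    import math

    def dead_reckon(start_lat, start_lon, heading_deg, speed_mps, duration_s):
        """Propagate a position from compass heading and speed through the water.

        Flat-earth approximation; currents are ignored, which is why the
        dead-reckoned surfacing position drifts from the true one.
        """
        earth_radius = 6371000.0  # metres
        distance = speed_mps * duration_s
        heading = math.radians(heading_deg)
        dlat = distance * math.cos(heading) / earth_radius
        dlon = distance * math.sin(heading) / (earth_radius * math.cos(math.radians(start_lat)))
        return start_lat + math.degrees(dlat), start_lon + math.degrees(dlon)

    def surfacing_error_m(pred_lat, pred_lon, gps_lat, gps_lon):
        """Great-circle (haversine) distance between predicted and actual surfacing fixes."""
        r = 6371000.0
        p1, p2 = math.radians(pred_lat), math.radians(gps_lat)
        dphi = math.radians(gps_lat - pred_lat)
        dlmb = math.radians(gps_lon - pred_lon)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Example: 4 hours underwater at 0.3 m/s on heading 045°, then compare to the GPS fix.
    lat, lon = dead_reckon(33.50, -118.40, 45.0, 0.3, 4 * 3600)
    print(surfacing_error_m(lat, lon, 33.53, -118.36))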
Abstract:
Bone healing is known to occur through the successive formation and resorption of various tissues with different structural and mechanical properties. To gain better insight into this sequence of events, we used environmental scanning electron microscopy (ESEM) together with scanning small-angle X-ray scattering (sSAXS) to reveal the size and orientation of bone mineral particles within the regenerating callus tissues at different healing stages (2, 3, 6, and 9 weeks). Sections of 200 µm were cut from embedded blocks of midshaft tibial samples in a sheep osteotomy model with an external fixator. Regions of interest on the medial side of the proximal fragment were chosen to be the periosteal callus, middle callus, intercortical callus, and cortex. Mean thickness (T parameter), degree of alignment (ρ parameter), and predominant orientation (ψ parameter) of the mineral particles were deduced from the resulting sSAXS patterns with a spatial resolution of 200 µm. 2D maps of T and ρ overlaid on ESEM images revealed that callus formation occurred in two waves of bone formation: a highly disordered mineralized tissue was deposited first, followed by a bony tissue that appeared more lamellar in the ESEM and whose mineral particles were more aligned, as revealed by sSAXS. As a consequence, the degree of alignment and the mineral particle size within the callus increased with healing time, whereas at any given moment there were structural gradients, for example, from the periosteal toward the middle callus.
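The ρ and ψ parameters referred to above are commonly derived from the azimuthal intensity profile of each scattering pattern. The sketch below is a minimal illustration under that assumption (the profile, the background handling, and the function name are simplifications, not the study's analysis pipeline): ψ is taken as the azimuth of maximum intensity and ρ as the fraction of the total intensity lying above the isotropic background.

    import numpy as np

    def rho_psi(chi_deg, intensity):
        """Degree of alignment (rho) and predominant orientation (psi)
        from an azimuthal SAXS intensity profile I(chi).

        rho = (total - isotropic background) / total, i.e. the fraction of
        scattering attributable to aligned mineral particles; psi is the
        azimuth at which the profile peaks.
        """
        intensity = np.asarray(intensity, dtype=float)
        background = intensity.min()        # isotropic contribution
        oriented = intensity - background   # aligned contribution
        rho = oriented.sum() / intensity.sum()
        psi = chi_deg[np.argmax(intensity)]
        return rho, psi

    # Illustrative profile (not measured data): a broad peak near 90° on a flat background.
    chi = np.arange(0, 180, 5)
    profile = 1.0 + 2.0 * np.exp(-((chi - 90.0) / 20.0) ** 2)
    print(rho_psi(chi, profile))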
Abstract:
Intelligible and accurate risk-based decision-making requires a complex balance of information from different sources, appropriate statistical analysis of this information, and intelligent inference and decisions made on the basis of these analyses. Importantly, this requires an explicit acknowledgement of uncertainty in the inputs and outputs of the statistical model. The aim of this paper is to progress a discussion of these issues in the context of several motivating problems related to the wider scope of agricultural production. These problems include biosecurity surveillance design, pest incursion, environmental monitoring, and import risk assessment. The information to be integrated includes observational and experimental data, remotely sensed data, and expert information. We describe our efforts in addressing these problems using Bayesian models and Bayesian networks. These approaches provide a coherent and transparent framework for modelling complex systems, combining the different information sources, and allowing for uncertainty in inputs and outputs. While the theory underlying Bayesian modelling has a long and well-established history, its application to complex problems is only now becoming feasible, due to the increased availability of methodological and computational tools. Of course, there are still hurdles and constraints, which we also address by sharing our endeavours and experiences.
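As a toy illustration of the Bayesian updating that underlies such models (the prior, sensitivity, and specificity values are invented for the example and do not come from the paper), the sketch below revises the probability of a pest incursion after a negative result from an imperfect surveillance method.

    def posterior_presence(prior, sensitivity, specificity, detected):
        """Bayes' rule for pest presence given one surveillance outcome.

        prior        P(present) before the survey
        sensitivity  P(detected | present)
        specificity  P(not detected | absent)
        """
        if detected:
            likelihood_present = sensitivity
            likelihood_absent = 1.0 - specificity
        else:
            likelihood_present = 1.0 - sensitivity
            likelihood_absent = specificity
        numerator = likelihood_present * prior
        return numerator / (numerator + likelihood_absent * (1.0 - prior))

    # A negative survey with 70% sensitivity and 95% specificity
    # reduces a 10% prior probability of incursion to roughly 3.4%.
    print(posterior_presence(prior=0.10, sensitivity=0.70, specificity=0.95, detected=False))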
Abstract:
Digital forensic examiners often need to identify the type of a file or file fragment based only on its content. Content-based file type identification schemes typically use a byte frequency distribution with statistical machine learning to classify file types. Most algorithms analyze the entire file content to obtain the byte frequency distribution, a technique that is inefficient and time-consuming. This paper proposes two techniques for reducing the classification time. The first selects a subset of features based on their frequency of occurrence. The second speeds up classification by sampling several blocks from the file. Experimental results demonstrate that up to a fifteen-fold reduction in analysis time can be achieved with limited impact on accuracy.
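A minimal sketch of the two ideas named above, sampling a few blocks and building a byte frequency distribution, with classification reduced to cosine similarity against per-type centroid distributions (the sampling parameters, centroid store, and file names are illustrative assumptions rather than the paper's algorithm).

    import os
    import random
    import numpy as np

    def sampled_byte_frequency(path, num_blocks=8, block_size=4096):
        """Byte frequency distribution estimated from a few random blocks
        instead of the whole file, trading a little accuracy for speed."""
        counts = np.zeros(256, dtype=np.int64)
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            for _ in range(num_blocks):
                f.seek(random.randrange(max(1, size - block_size + 1)))
                for b in f.read(block_size):
                    counts[b] += 1
        total = counts.sum()
        return counts / total if total else counts.astype(float)

    def classify(bfd, centroids):
        """Pick the file type whose centroid BFD is most similar (cosine similarity)."""
        def cos(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        return max(centroids, key=lambda t: cos(bfd, centroids[t]))

    # Hypothetical usage with precomputed per-type centroid distributions:
    # centroids = {"jpeg": np.load("jpeg_bfd.npy"), "pdf": np.load("pdf_bfd.npy")}
    # print(classify(sampled_byte_frequency("unknown.bin"), centroids))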
Abstract:
This paper presents a method for measuring the in-bucket payload volume on a dragline excavator for the purpose of estimating the material's bulk density in real time. Knowledge of the payload's bulk density can provide feedback to mine planning and scheduling to improve blasting and thereby produce a more uniform bulk density across the excavation site. This allows a single optimal bucket size to be used for maximum overburden removal per dig, in turn reducing costs and emissions in dragline operation and maintenance. The proposed solution uses a range-bearing laser to locate and scan full buckets between the lift and dump stages of the dragline cycle. The bucket is segmented from the scene using cluster analysis, and the pose of the bucket is calculated using the Iterative Closest Point (ICP) algorithm. Payload points are identified using a known bucket model and subsequently converted into a height grid for volume estimation. Results from both scaled and full-scale implementations show that this method can achieve an accuracy above 95%.
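A minimal sketch of the final volume-estimation step, assuming the payload points have already been segmented and transformed into the bucket frame by the ICP-based pose estimate (the grid resolution and the synthetic point cloud are illustrative assumptions, not the paper's implementation).

    import numpy as np

    def payload_volume(points_xyz, cell_size=0.05):
        """Estimate payload volume from 3D points expressed in the bucket frame.

        Points are binned into an x-y grid; each occupied cell contributes its
        mean height (above the bucket floor at z = 0) times the cell area.
        """
        pts = np.asarray(points_xyz, dtype=float)
        ix = np.floor((pts[:, 0] - pts[:, 0].min()) / cell_size).astype(int)
        iy = np.floor((pts[:, 1] - pts[:, 1].min()) / cell_size).astype(int)
        z_sum, z_cnt = {}, {}
        for i, j, z in zip(ix, iy, pts[:, 2]):
            key = (i, j)
            z_sum[key] = z_sum.get(key, 0.0) + z
            z_cnt[key] = z_cnt.get(key, 0) + 1
        return sum(z_sum[k] / z_cnt[k] for k in z_sum) * cell_size ** 2

    # Synthetic heap, 1 m x 1 m with ~0.2 m mean height -> roughly 0.2 m^3.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(0, 1, 20000),
                           rng.uniform(0, 1, 20000),
                           rng.uniform(0.15, 0.25, 20000)])
    print(payload_volume(pts))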
Abstract:
The aim of the study is to establish optimum building aspect ratios and south window sizes for residential buildings from a thermal performance point of view. The effects of six different building aspect ratios and eight different south window sizes for each aspect ratio are analyzed for apartments located at intermediate floors of buildings, with the aid of the computer-based thermal analysis program SUNCODE-PC, in five cities of Turkey: Erzurum, Ankara, Diyarbakir, Izmir, and Antalya. The results are evaluated in terms of annual energy consumption and the optimum values are derived. The optimum values and total energy consumption rates are then compared among the analyzed cities.
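A minimal sketch of the parametric sweep such a study implies, with a placeholder standing in for the SUNCODE-PC simulation and illustrative, assumed aspect-ratio and window-size values (not the study's actual inputs).

    import itertools

    def annual_energy(aspect_ratio, south_window_ratio, city):
        """Placeholder for one SUNCODE-PC run: annual heating + cooling
        energy (kWh) for a single apartment configuration. Purely illustrative."""
        raise NotImplementedError("replace with the thermal simulation output")

    def optimum_configuration(aspect_ratios, window_ratios, city):
        """Pick the aspect ratio / south window size pair with the lowest annual energy."""
        return min(itertools.product(aspect_ratios, window_ratios),
                   key=lambda cfg: annual_energy(cfg[0], cfg[1], city))

    # Six aspect ratios x eight south window sizes per city, mirroring the study design:
    # best = optimum_configuration([1.0, 1.5, 2.0, 2.5, 3.0, 3.5],
    #                              [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    #                              "Ankara")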
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
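Written out, the bound described above reads as follows (log A and log m factors and constants suppressed, as in the abstract):

    \[
      \Pr\bigl[\text{misclassification}\bigr] \;\le\; \hat{\varepsilon}_m \;+\; O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right),
    \]

where \hat{\varepsilon}_m is the error estimate derived from the squared error on the m training patterns, A bounds the sum of the magnitudes of the weights feeding each unit, and n is the input dimension. The dependence on A rather than on the number of weights is what makes the bound informative when the network has many more weights than training examples.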