945 results for (2D)2PCA


Relevance: 10.00%

Abstract:

In this paper, we present an automatic system for precise urban road model reconstruction from aerial images with high spatial resolution. The proposed approach consists of two steps: i) road surface detection and ii) road pavement marking extraction. In the first step, a support vector machine (SVM) is used to classify the images into two categories: road and non-road. In the second step, road lane markings are extracted from the generated road surface using 2D Gabor filters. Experiments using several pan-sharpened aerial images of Brisbane, Queensland have validated the proposed method.
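The two-step pipeline described above can be sketched as follows, on synthetic data: an SVM separates road from non-road pixels, and a 2D Gabor kernel responds to bright, oriented lane markings. The per-pixel features (intensity, local variance) and the filter parameters are illustrative assumptions, not the authors' exact choices.

```python
# Minimal sketch of the two-step pipeline on synthetic data:
# (1) SVM pixel classification into road / non-road,
# (2) a 2D Gabor kernel tuned to a lane marking's width and orientation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Step 1: train an SVM on assumed (intensity, local variance) features.
road = rng.normal([0.4, 0.01], [0.05, 0.005], size=(200, 2))
nonroad = rng.normal([0.7, 0.08], [0.1, 0.02], size=(200, 2))
X = np.vstack([road, nonroad])
y = np.array([1] * 200 + [0] * 200)          # 1 = road, 0 = non-road
clf = SVC(kernel="rbf").fit(X, y)

# Step 2: a real-valued 2D Gabor kernel (Gaussian envelope * cosine carrier).
def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)    # rotated coordinates
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

kernel = gabor_kernel(theta=0.0)   # carrier along x: responds to vertical stripes
print(clf.predict([[0.42, 0.012]]))
```

Convolving the classified road surface with a small bank of such kernels at different orientations, then thresholding the responses, is one common way to realise the marking-extraction step.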

Relevance: 10.00%

Abstract:

Cell-based therapies for bone regeneration are an exciting emerging technology, but the availability of osteogenic cells is limited and an ideal cell source has not been identified. Amniotic fluid-derived stem (AFS) cells and bone marrow-derived mesenchymal stem cells (MSCs) were compared to determine their osteogenic differentiation capacity in both 2D and 3D environments. In 2D culture, the AFS cells produced more mineralized matrix but showed delayed peaks in osteogenic markers. Cells were also cultured on 3D scaffolds constructed of poly-ε-caprolactone for 15 weeks. MSCs differentiated more quickly than AFS cells on 3D scaffolds, but their mineralized matrix production slowed considerably after 5 weeks. In contrast, the rate of AFS cell mineralization continued to increase out to 15 weeks, at which time AFS constructs contained 5-fold more mineralized matrix than MSC constructs. Therefore, cell source should be taken into consideration in cell therapy: MSCs would be a good choice for immediate matrix production, whereas AFS cells would continue robust mineralization for an extended period of time. This study demonstrates that stem cell source can dramatically influence the magnitude and rate of osteogenic differentiation in vitro.

Relevance: 10.00%

Abstract:

Bone healing is known to occur through the successive formation and resorption of various tissues with different structural and mechanical properties. To gain better insight into this sequence of events, we used environmental scanning electron microscopy (ESEM) together with scanning small-angle X-ray scattering (sSAXS) to reveal the size and orientation of bone mineral particles within the regenerating callus tissues at different healing stages (2, 3, 6, and 9 weeks). Sections of 200 µm were cut from embedded blocks of midshaft tibial samples in a sheep osteotomy model with an external fixator. Regions of interest on the medial side of the proximal fragment were chosen to be the periosteal callus, middle callus, intercortical callus, and cortex. Mean thickness (T parameter), degree of alignment (ρ parameter), and predominant orientation (ψ parameter) of the mineral particles were deduced from the resulting sSAXS patterns with a spatial resolution of 200 µm. 2D maps of T and ρ overlaid on ESEM images revealed that callus formation occurred in two waves of bone formation: a highly disordered mineralized tissue was deposited first, followed by a bony tissue that appeared more lamellar in the ESEM and whose mineral particles were more aligned, as revealed by sSAXS. As a consequence, the degree of alignment and the mineral particle size within the callus increased with healing time, whereas at any given moment there were structural gradients, for example, from the periosteal toward the middle callus.

Relevance: 10.00%

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms that learn from examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data, Naive Bayes and the Support Vector Machine, and their predictive results are found to be superior to those of previous efforts on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
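The abstract gives only a high-level description of Rank Sum; the sketch below is one plausible minimal reading, not the thesis's exact algorithm. Per feature, each class's training values are histogrammed; for a test point, the classes are ranked by density in the bin the point falls into; ranks are summed over features and the class with the best (lowest) total wins. The bin count and tie handling are assumptions.

```python
# A hedged, minimal reading of the "Rank Sum" idea: rank classes by their
# per-feature bin densities and classify by the sum of ranks over features.
import numpy as np

class RankSum:
    def __init__(self, n_bins=10):
        self.n_bins = n_bins

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.edges_ = [np.histogram_bin_edges(X[:, j], bins=self.n_bins)
                       for j in range(X.shape[1])]
        # density per (feature, class, bin)
        self.dens_ = np.zeros((X.shape[1], len(self.classes_), self.n_bins))
        for j, edges in enumerate(self.edges_):
            for c, cls in enumerate(self.classes_):
                hist, _ = np.histogram(X[y == cls, j], bins=edges, density=True)
                self.dens_[j, c] = hist
        return self

    def predict(self, X):
        out = []
        for x in X:
            total = np.zeros(len(self.classes_))
            for j, edges in enumerate(self.edges_):
                b = np.clip(np.searchsorted(edges, x[j]) - 1, 0, self.n_bins - 1)
                # rank 0 = densest class in this bin
                total += np.argsort(np.argsort(-self.dens_[j, :, b]))
            out.append(self.classes_[np.argmin(total)])
        return np.array(out)

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(100, 3))   # "not fault prone" modules
X1 = rng.normal(2.0, 1.0, size=(100, 3))   # "fault prone" modules
model = RankSum().fit(np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100))
print(model.predict(np.array([[2.1, 2.0, 1.9], [0.1, -0.2, 0.0]])))
```

The ranking abstraction makes the classifier insensitive to the absolute scale of the densities, which is one plausible motivation for laying ranks over raw bin counts.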

Relevance: 10.00%

Abstract:

I am sure you’ve heard it too: Green is the new Black. While Black was the colour of choice back in the days when Henry Ford introduced process standardization with his assembly line for the Ford Model T (over 15 million of these were sold!), Green is now the colour of choice for many business organizations, private and public. I am not talking about the actual colour of their business shirts or their logo 2.0; I am referring to the eco-aware movement that has pushed sustainability into the top-ten list of business buzzwords. What used to be a boutique market for tourism and political activists has become the biggest business revolution since the e-commerce boom. Public and private organizations alike push towards “sustainable” solutions and practices. That push is triggered partly by the immense reputational gains associated with branding your organization as “green”, and partly by emerging societal, legal and constitutional regulations that force organizations to become more ecologically aware and sustainable. But the boom goes beyond organizational reality. Even in academia, sustainability has become a research “fashion wave” (see [1] if you are interested in research fashion waves), similar to the hype around Neuroscience that our colleagues in the natural sciences are witnessing these days. Mind you, I’m a fan. A big fan, in fact. As academics, we are constantly searching for problem areas that offer an opportunity to do rigorous research (studies that are executed to perfection) on relevant topics (studies that have practical value and provide impact to the community). What better playground could there be than exploring the options that Business Process Management provides for creating a sustainable, green future? I’m getting excited just writing about this!
So, join me in exploring some of the current thoughts around how BPM can contribute to the sustainability fashion parade and let me introduce you to some of the works that scholars have produced recently in their attempts to identify solutions.

Relevance: 10.00%

Abstract:

The use of the stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and their relation to various meteoric water lines (MWLs), and on plots of either ratio against parameters such as Cl⁻ or EC. An extension of this interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. An enhancement of presentation and interpretation is the production of “isoscapes”, usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation, using the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying the spatial relationships and allowing interpolation between “data points”, i.e. borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient-temperature systems. However, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface and return-irrigation-water evaporation. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a GAB signature). Variations in the source of recharging water at a catchment scale can also be displayed. Interpolation between bores is not always possible, depending on their number and spacing and on the elongate configuration of the alluvium. In these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for each of δ18O and δ2H, with colour coding for isotope values.
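For reference, the conventional x/y interpretation mentioned above compares samples to a meteoric water line. A minimal sketch using the Global MWL (Craig, 1961: δ2H = 8·δ18O + 10): samples plotting well below the line (low deuterium excess) suggest evaporative enrichment. The sample values below are illustrative, not Lockyer Valley data.

```python
# Deuterium excess relative to the Global Meteoric Water Line.
def d_excess(d18o, d2h):
    """d = d2H - 8 * d18O; d = +10 permil for a sample on the GMWL."""
    return d2h - 8.0 * d18o

samples = {"rain": (-5.0, -30.0), "evaporated": (-2.0, -20.0)}
for name, (o, h) in samples.items():
    print(f"{name}: d-excess = {d_excess(o, h):+.1f} permil")
```

In a 3D visualisation, colour-coding boreholes by d-excess rather than by either raw ratio is one simple way to highlight evaporated return-irrigation water.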

Relevance: 10.00%

Abstract:

The Lockyer Valley in southeast Queensland supports important and intensive irrigation which is dependent on the quality and availability of groundwater. Prolonged drought conditions from ~1997 resulted in a depletion of the alluvial aquifers, and concern for the long-term sustainability of this resource. By 2008, many areas of the valley were at < 20% of storage. Some relief occurred with rain events in early 2009; then in December 2010 - January 2011, most of southeast Queensland experienced unprecedented flooding. These storm-based events have caused a shift in research focus from investigations of drought conditions and mitigation to flood response analysis. For the alluvial aquifer system of the valley, a preliminary assessment of groundwater observation bore data, prior to and during the flood, indicates that there is a spatially variable aquifer response. While water levels in some bores screened in unconfined shallow aquifers have recovered by more than 10 m within a short period of time (months), others show only a small or moderate response. Measurements of pre- and post-flood groundwater levels and high-resolution time-series records from data loggers are considered within the framework of a 3D geological model of the Lockyer Valley using the Groundwater Visualisation System (GVS). Groundwater level fluctuations covering both drought and flood periods are used to estimate groundwater recharge using the water table fluctuation (WTF) method, supplemented by estimates derived using chloride mass balance. The presentation of hydraulic and recharge information in a 3D format has considerable advantages over the traditional 2D presentation of data. The 3D approach allows the distillation of multiple types of information (topographic, geological, hydraulic and spatial) into one representation that provides valuable insights into the major controls of groundwater flow and recharge.
The influence of aquifer lithology on the spatial variability of groundwater recharge is also demonstrated.
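The two recharge estimators named above reduce to simple formulas: the water table fluctuation method multiplies specific yield by the head rise, and chloride mass balance scales precipitation by the ratio of chloride concentrations. The numbers in the sketch below are illustrative assumptions, not Lockyer Valley data.

```python
# Sketch of the two recharge estimates used in the study above.
def recharge_wtf(specific_yield, head_rise_m):
    """Water table fluctuation method: R = Sy * dh (metres of water)."""
    return specific_yield * head_rise_m

def recharge_cmb(precip_mm, cl_precip_mgL, cl_gw_mgL):
    """Chloride mass balance: R = P * Cl_precip / Cl_groundwater."""
    return precip_mm * cl_precip_mgL / cl_gw_mgL

# e.g. a 10 m post-flood rise in an unconfined bore, assumed Sy = 0.05
print(recharge_wtf(0.05, 10.0) * 1000, "mm")      # 500.0 mm
# e.g. 900 mm/yr rainfall, 5 mg/L Cl in rain, 150 mg/L Cl in groundwater
print(recharge_cmb(900.0, 5.0, 150.0), "mm/yr")   # 30.0 mm/yr
```

The two estimates target different timescales (event response versus long-term average), which is one reason the study uses them as complements rather than substitutes.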

Relevance: 10.00%

Abstract:

Traders in the financial world are assessed by the amount of money they make and, increasingly, by the amount of money they make per unit of risk taken, a measure known as the Sharpe Ratio. Little is known about the average Sharpe Ratio among traders, but the Efficient Markets Hypothesis suggests that traders, like asset managers, should not outperform the broad market. Here we report the findings of a study conducted in the City of London which shows that a population of experienced traders attains Sharpe Ratios significantly higher than the broad market. To explain this anomaly we examine a surrogate marker of prenatal androgen exposure, the second-to-fourth finger length ratio (2D:4D), which has previously been identified as a predictor of a trader's long-term profitability. We find that it predicts the amount of risk taken by traders but not their Sharpe Ratios. We do, however, find that traders' Sharpe Ratios increase markedly with the number of years they have traded, a result suggesting that learning plays a role in increasing the returns of traders. Our findings present anomalous data for the Efficient Markets Hypothesis.
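For reference, the Sharpe Ratio discussed above is the mean excess return per unit of return volatility. A minimal sketch, with an illustrative return series and a zero risk-free rate assumed:

```python
# Sharpe Ratio: mean excess return divided by the standard deviation
# of excess returns (sample standard deviation here).
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

monthly = [0.02, -0.01, 0.03, 0.01, 0.00, 0.02]   # illustrative returns
print(round(sharpe_ratio(monthly), 3))             # 0.793
```

In practice the ratio is usually annualised (e.g. multiplied by sqrt(12) for monthly returns); the convention chosen does not affect comparisons between traders measured the same way.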

Relevance: 10.00%

Abstract:

Previous work on pattern-forming dynamics of team sports has investigated sub-phases of basketball and rugby union by focussing on one-versus-one (1v1) attacker-defender dyads. This body of work has identified the role of candidate control parameters, interpersonal distance and relative velocity, in predicting the outcomes of team player interactions. These two control parameters have been described as functioning in a nested relationship where relative velocity between players comes to the fore within a critical range of interpersonal distance. The critical influence of constraints on the intentionality of player behaviour has also been identified through the study of 1v1 attacker-defender dyads. This thesis draws from previous work adopting an ecological dynamics approach, which encompasses both Dynamical Systems Theory and Ecological Psychology concepts, to describe attacker-defender interactions in 1v1 dyads in association football. Twelve male youth association football players (average age 15.3 ± 0.5 yrs) performed as both attackers and defenders in 1v1 dyads in three field positions in an experimental manipulation of the proximity to goal and the role of players. Player and ball motion was tracked using TACTO 8.0 software (Fernandes & Caixinha, 2003) to produce two-dimensional (2D) trajectories of players and the ball on the ground. Significant differences were found for player-to-ball interactions depending on proximity to goal manipulations, indicating how key reference points in the environment such as the location of the goal may act as a constraint that shapes decision-making behaviour. Results also revealed that interpersonal distance and relative velocity alone were insufficient for accurately predicting the outcome of a dyad in association football. 
Instead, combined values of interpersonal distance, ball-to-defender distance, attacker-to-ball distance, attacker-to-ball relative velocity and relative angles were found to indicate the state of dyad outcomes.
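The two candidate control parameters above can be computed directly from 2D tracked positions. A minimal sketch, with illustrative coordinates and sampling interval (not data from the study):

```python
# Interpersonal distance and relative velocity (closing speed) from
# 2D attacker/defender positions at two consecutive samples.
import math

def interpersonal_distance(p_att, p_def):
    return math.dist(p_att, p_def)

def relative_velocity(p_att0, p_att1, p_def0, p_def1, dt):
    """Rate of change of interpersonal distance; negative = closing."""
    d0 = math.dist(p_att0, p_def0)
    d1 = math.dist(p_att1, p_def1)
    return (d1 - d0) / dt

# attacker advances 0.5 m toward a stationary defender in 0.2 s
print(interpersonal_distance((0.0, 0.0), (4.0, 0.0)))          # 4.0
print(relative_velocity((0.0, 0.0), (0.5, 0.0),
                        (4.0, 0.0), (4.0, 0.0), dt=0.2))       # -2.5
```

The nested relationship described above would then be modelled by evaluating relative velocity only once interpersonal distance falls inside its critical range; the thesis's additional variables (ball-to-defender distance, attacker-to-ball distance and relative angles) are computed analogously from the ball trajectory.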

Relevance: 10.00%

Abstract:

A new algorithm for extracting features from images for object recognition is described. The algorithm uses higher order spectra to provide desirable invariance properties, to provide noise immunity, and to incorporate nonlinearity into the feature extraction procedure, thereby allowing the use of simple classifiers. An image can be reduced to a set of 1D functions via the Radon transform, or alternatively, the Fourier transform of each 1D projection can be obtained from a radial slice of the 2D Fourier transform of the image according to the Fourier slice theorem. A triple product of Fourier coefficients, referred to as the deterministic bispectrum, is computed for each 1D function and is integrated along radial lines in bifrequency space. Phases of the integrated bispectra are shown to be translation- and scale-invariant. Rotation invariance is achieved by a regrouping of these invariants at a constant radius followed by a second stage of invariant extraction. Rotation invariance is thus converted to translation invariance in the second step. Results using synthetic and actual images show that isolated, compact clusters are formed in feature space. These clusters are linearly separable, indicating that the nonlinearity required in the mapping from the input space to the classification space is incorporated well into the feature extraction stage. The use of higher order spectra results in good noise immunity, as verified with synthetic and real images. Classification of images using the higher order spectra-based algorithm compares favorably to classification using the method of moment invariants.
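The deterministic bispectrum mentioned above is the triple product B(f1, f2) = X(f1) X(f2) X*(f1 + f2); a translation multiplies each X(f) by a linear phase ramp, and the ramp cancels exactly in the triple product, which is the source of the translation invariance. A minimal sketch verifying this on a toy signal (the signal length and frequency pair are illustrative):

```python
# Deterministic bispectrum of a 1D signal at one bifrequency (f1, f2),
# and a check that its phase is unchanged by circular translation.
import numpy as np

def bispectrum(x, f1, f2):
    X = np.fft.fft(x)
    return X[f1] * X[f2] * np.conj(X[f1 + f2])

rng = np.random.default_rng(2)
x = rng.normal(size=64)
x_shift = np.roll(x, 7)             # circular translation by 7 samples

b1 = bispectrum(x, 3, 5)
b2 = bispectrum(x_shift, 3, 5)
print(np.allclose(b1, b2))          # True: bispectrum is shift-invariant
```

The algorithm described above goes further, integrating such triple products along radial lines in bifrequency space and keeping the phases of the integrals as features.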

Relevance: 10.00%

Abstract:

Gait recognition approaches continue to struggle with challenges including view-invariance, low-resolution data, robustness to unconstrained environments, and fluctuating gait patterns due to subjects carrying goods or wearing different clothes. Although computationally expensive, model-based techniques offer promise over appearance-based techniques for these challenges, as they gather gait features and interpret gait dynamics in skeleton form. In this paper, we propose a fast 3D ellipsoidal-based gait recognition algorithm using a 3D voxel model derived from multi-view silhouette images. This approach directly addresses the limitations of view dependency and self-occlusion in existing ellipse fitting model-based approaches. Voxel models are segmented into four components (left and right legs, above and below the knee), and ellipsoids are fitted to each region using eigenvalue decomposition. Features derived from the ellipsoid parameters are modeled using a Fourier representation to retain the temporal dynamic pattern for classification. We demonstrate the proposed approach using the CMU MoBo database and show that an improvement of 15-20% can be achieved over a 2D ellipse fitting baseline.
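The ellipsoid-fitting step described above can be sketched via an eigenvalue decomposition of a segment's point covariance: the eigenvectors give the ellipsoid axes and the eigenvalues their lengths. The synthetic Gaussian cloud below stands in for a segmented voxel region; the exact fitting convention in the paper may differ.

```python
# Fit an ellipsoid to a point cloud by eigendecomposition of its covariance.
import numpy as np

def fit_ellipsoid(points):
    """Return (centroid, axes as rows, semi-axis lengths), longest first."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    lengths = np.sqrt(np.maximum(evals, 0.0))   # std dev along each axis
    return centroid, evecs.T[::-1], lengths[::-1]

rng = np.random.default_rng(3)
# elongated cloud: long axis along z, like a lower-leg voxel segment
pts = rng.normal(size=(2000, 3)) * np.array([1.0, 1.0, 5.0])
c, axes, lengths = fit_ellipsoid(pts)
print(lengths)   # longest semi-axis is roughly 5x the others
```

Tracking the recovered centroids, axis orientations and lengths over a gait cycle yields the time series that the paper models with a Fourier representation.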

Relevance: 10.00%

Abstract:

We have previously reported that novel vitronectin:growth factor (VN:GF) complexes significantly increase re-epithelialization in a porcine deep dermal partial-thickness burn model. However, the potential exists to further enhance the healing response through combination with an appropriate delivery vehicle which facilitates sustained local release and reduced doses of VN:GF complexes. Hyaluronic acid (HA), an abundant constituent of the interstitium, is known to function as a reservoir for growth factors and other bioactive species. The physicochemical properties of HA confer it with an ability to sustain elevated pericellular concentrations of these species. This has been proposed to arise via HA prolonging interactions of the bioactive species with cell surface receptors and/or protecting them from degradation. In view of this, the potential of HA to facilitate the topical delivery of VN:GF complexes was evaluated. Two-dimensional (2D) monolayer cell cultures and 3D de-epidermised dermis (DED) human skin equivalent (HSE) models were used to test skin cell responses to HA and VN:GF complexes. Our 2D studies revealed that VN:GF complexes and HA stimulate the proliferation of human fibroblasts but not keratinocytes. Experiments in our 3D DED-HSE models showed that VN:GF complexes, both alone and in conjunction with HA, led to enhanced development of both the proliferative and differentiating layers in the DED-HSE models. However, there was no significant difference between the thicknesses of the epidermis treated with VN:GF complexes alone and VN:GF complexes together with HA. While the addition of HA did not enhance all the cellular responses to VN:GF complexes examined, it was not inhibitory, and may confer other advantages related to enhanced absorption and transport that could be beneficial in delivery of the VN:GF complexes to wounds.

Relevance: 10.00%

Abstract:

This paper reports an investigation of primary school children’s understandings of "square". Twelve students participated in a small-group teaching experiment session, in which they were interviewed and guided to construct a square in a 3D virtual reality learning environment (VRLE). Main findings include mixed levels of "quasi" geometrical understandings, misconceptions about length and angles, and ambiguous uses of geometrical language for location, direction, and movement. These findings have implications for future teaching and learning about 2D shapes, with particular reference to VRLEs.

Relevance: 10.00%

Abstract:

An automatic approach to road lane marking extraction from high-resolution aerial images is proposed, which can automatically detect the road surfaces in rural areas based on hierarchical image analysis. The procedure is facilitated by the road centrelines obtained from low-resolution images. The lane markings are further extracted on the generated road surfaces with 2D Gabor filters. The proposed method is applied on the aerial images of the Bruce Highway around Gympie, Queensland. Evaluation of the generated road surfaces and lane markings using four representative test fields has validated the proposed method.

Relevance: 10.00%

Abstract:

Since the availability of 3D full-body scanners and the associated software systems for operations with large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by the representatives of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that the measurement of a full-body point cloud allows for easier storage of raw data and improves quality control, the difficulties in calculating standardized measurements from the point cloud are widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions showed unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool to replicate specific individuals for further virtual studies, or to personalize garments, their use in ergonomic design must be critically assessed. Ergonomic, volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections. A 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach, designing for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution functions of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles (a common ergonomic requirement) is undefined.
Consequently, we suggest critically reviewing the cost and use of 3D anthropometry. We also recommend making proper use of the widely available one- and two-dimensional anthropometric data in ergonomic design.
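The "combined percentiles" point above can be made concrete: accommodating the 95th percentile of two correlated dimensions separately accommodates fewer than 95% of the population. A minimal Monte Carlo sketch; the correlation value and standardized variables are illustrative assumptions.

```python
# How many people fit under the separate 95th percentiles of two
# correlated anthropometric variables (e.g. stature and girth)?
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
rho = 0.5                                   # assumed stature-girth correlation
cov = [[1.0, rho], [rho, 1.0]]
stature, girth = rng.multivariate_normal([0, 0], cov, size=n).T

s95 = np.percentile(stature, 95)
g95 = np.percentile(girth, 95)
both_fit = np.mean((stature <= s95) & (girth <= g95))
print(f"accommodated: {both_fit:.1%}")      # noticeably below 95%
```

Only at perfect correlation would the two univariate percentiles coincide with a single multivariate one, which is why a "combined percentile" is undefined without specifying the joint distribution.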