925 results for Convex Duality


Relevance:

10.00%

Publisher:

Abstract:

Adolescent Idiopathic Scoliosis (AIS) is the most common deformity of the spine, affecting 2-4% of the population. Previous studies have shown that the vertebrae in scoliotic spines undergo abnormal shape changes; however, there has been little exploration of how scoliosis affects bone density distribution within the vertebrae. In this study, existing CT scans of 53 female idiopathic scoliosis patients with right-sided main thoracic curves were used to measure the lateral (right-to-left) bone density profile at mid-height through each vertebral body. Five key bone density profile measures were identified from each normalised bone density distribution, and multiple regression analysis was performed to explore the relationship between bone density distribution and patient demographics (age, height, weight, body mass index (BMI), skeletal maturity, time since menarche, vertebral level, and scoliosis curve severity). Results showed a marked convex/concave asymmetry in bone density for vertebral levels at or near the apex of the scoliotic curve. At the apical vertebra, mean bone density in the left (concave) cortical shell was 23.5% higher than in the right (convex) cortical shell, and cancellous bone density along the central 60% of the lateral path from convex to concave increased by 13.8%. The centre of mass of the bone density profile at the thoracic curve apex was located 53.8% of the way along the lateral path, indicating a shift of nearly 4% toward the concavity of the deformity. These lateral bone density gradients tapered off with distance from the apical vertebra. Multi-linear regressions showed that the peak bone density of the right cortical shell is significantly correlated with skeletal maturity, with each Risser increment corresponding to a 4-5% increase in mineral-equivalent bone density. There were also statistically significant relationships between patient height, weight and BMI and the gradient of cancellous bone density along the central 60% of the lateral path. Bone density gradient is positively correlated with weight and negatively correlated with height and BMI, such that at the apical vertebra a unit decrease in BMI corresponds to an almost 100% increase in bone density gradient.
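
As an illustration of the analysis pipeline described above (a sketch only: the variable names, units and simulated data are assumptions, not the study's data), a multi-linear regression of one bone density profile measure on patient demographics might look like:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: one row per patient. The study's candidate
# predictors included age, height, weight, BMI, skeletal maturity
# (Risser grade), time since menarche, vertebral level and curve
# severity; four are simulated here for brevity.
rng = np.random.default_rng(0)
n = 53
X = np.column_stack([
    rng.uniform(10, 18, n),    # age (years)
    rng.uniform(140, 175, n),  # height (cm)
    rng.uniform(35, 70, n),    # weight (kg)
    rng.integers(0, 6, n),     # Risser grade (skeletal maturity)
])
# Illustrative response: peak cortical bone density on one side,
# made to depend on Risser grade plus noise.
y = 900 + 40 * X[:, 3] + rng.normal(0, 30, n)

# Ordinary least squares with an intercept; the coefficient p-values
# indicate which demographic predictors are significantly associated
# with the profile measure.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```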

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither a model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve upon and simplify the existing stability conditions. Moreover, the maximum allowable upper bound on the delay and the controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
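
Criteria of this kind are expressed as linear matrix inequalities (LMIs) solvable by convex optimization. Below is a minimal CVXPY sketch of the simplest LMI feasibility problem of this form, a Lyapunov inequality for a delay-free linear system; it is an illustration only, since the paper's delay-dependent, fuzzy-system LMIs are more involved, and the matrix A here is an arbitrary stand-in.

```python
import cvxpy as cp
import numpy as np

# Stand-in stable linear system x' = A x; the paper's actual LMIs
# involve delay terms and T-S fuzzy membership functions, omitted here.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
eps = 1e-6

# Find P > 0 with A^T P + P A < 0 (basic Lyapunov LMI feasibility).
# Q is an auxiliary symmetric variable tied to the LMI expression so
# that both matrix inequalities are over provably symmetric terms.
P = cp.Variable((2, 2), symmetric=True)
Q = cp.Variable((2, 2), symmetric=True)
constraints = [
    P >> eps * np.eye(2),
    Q == A.T @ P + P @ A,
    Q << -eps * np.eye(2),
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```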

Relevance:

10.00%

Publisher:

Abstract:

Introduction. Ideally, after selective thoracic fusion for Lenke Class IC (i.e. major thoracic / secondary lumbar) curves, the lumbar spine will spontaneously accommodate to the corrected position of the thoracic curve, thereby achieving a balanced spine and avoiding the need for fusion of lumbar spinal segments [1]. The purpose of this study was to evaluate the behaviour of the lumbar curve in Lenke Class IC adolescent idiopathic scoliosis (AIS) following video-assisted thoracoscopic spinal fusion and instrumentation (VATS) of the major thoracic curve. Methods. A retrospective review of 22 consecutive patients with AIS who underwent VATS by a single surgeon was conducted. The results were compared to published literature examining the behaviour of the secondary lumbar curve where other surgical approaches were employed. Results. Twenty-two patients (all female) with AIS underwent VATS. All major thoracic curves were right convex. The average age at surgery was 14 years (range 10 to 22 years). On average, 6.7 levels (6 to 8) were instrumented. The mean follow-up was 25.1 months (6 to 36). The mean pre-operative major thoracic Cobb angle was 53.8° (40° to 75°). The mean pre-operative secondary lumbar Cobb angle was 43.9° (34° to 55°). On bending radiographs, the secondary curve corrected to 11.3° (0° to 35°). The mean rib hump measurement was 15.0° (7° to 21°). At latest follow-up, the major thoracic Cobb angle measured on average 27.2° (20° to 41°) (p < 0.001, univariate ANOVA) and the mean secondary lumbar curve was 27.3° (15° to 42°) (p < 0.001). This represents an uninstrumented secondary curve correction factor of 37.8%. The mean rib hump measurement was 6.5° (2° to 15°) (p < 0.001). These results are comparable to published series in which open surgery was performed. Discussion. VATS is an effective method of correcting major thoracic curves with secondary lumbar curves. The behaviour of the secondary lumbar curve is consistent with published series in which open surgery, both anterior and posterior, was performed.
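
For clarity, the 37.8% uninstrumented correction factor quoted above is presumably the relative reduction in the mean secondary lumbar Cobb angle:

```latex
\frac{43.9^{\circ} - 27.3^{\circ}}{43.9^{\circ}} \approx 0.378 = 37.8\%
```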

Relevance:

10.00%

Publisher:

Abstract:

Over recent years, Unmanned Air Vehicles (UAVs) have become a powerful tool for reconnaissance and surveillance tasks. These vehicles are now available in a broad range of sizes and capabilities and are intended to fly in regions where the presence of onboard human pilots is either too risky or unnecessary. This paper describes the formulation and application of a design framework that supports the complex task of multidisciplinary design optimisation of UAV systems via evolutionary computation. The framework includes a Graphical User Interface (GUI), a robust Evolutionary Algorithm optimiser named HAPEA, several design modules, mesh generators and post-processing capabilities in an integrated platform. Population-based algorithms such as EAs are well suited to problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and with noise, and also to problems where we seek multiple solutions via Game Theory, namely a Nash equilibrium point or a Pareto set of non-dominated solutions. The application of the methodology is illustrated on conceptual and detailed multi-criteria and multidisciplinary shape design problems. Results indicate the practicality and robustness of the framework in finding optimal shapes and trade-offs between the disciplinary analyses, and in producing a set of non-dominated solutions forming an optimal Pareto front for the designer.
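
As a hedged illustration of the Pareto machinery such a framework relies on (not HAPEA itself, whose internals the abstract does not describe), a minimal non-dominated filter over candidate designs:

```python
import numpy as np

def pareto_front(costs: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows, minimising every column.

    A point is dominated if another point is no worse in all
    objectives and strictly better in at least one.
    """
    keep = []
    for i, c in enumerate(costs):
        dominated = np.any(
            np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Two hypothetical objectives for candidate UAV shapes, e.g. drag and
# structural weight (illustrative numbers only).
designs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(designs))  # -> [0 1 3]; design 2 is dominated by 1
```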

Relevance:

10.00%

Publisher:

Abstract:

This thesis explores a way to inform the architectural design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an exclusively Australian case study of a network enterprise comprised of collaborative, yet independent business entities. The internet revolution, substantial economic and cultural shifts, and an increased emphasis on lifestyle considerations have prompted a radical re-ordering of organisational relationships and the associated structures, processes, and places of doing business. The social milieu of the information age and the knowledge economy is characterised by an almost instantaneous flow of information and capital. This has culminated in a phenomenon termed by Manuel Castells the network society, where physical locations are joined together by continuous communication and virtual connectivity. A new spatial logic, encompassing redefined concepts of space and distance and requiring a comprehensive shift in the approach to designing workplace environments for today’s adaptive, collaborative organisations in a dynamic business world, provides the backdrop for this research. Within the duality of space and an augmentation of the traditional notions of place, organisational and institutional structures pose new challenges for the design professions. The literature revealed that workplace design strategies have always had a mono-organisational focus. The phenomenon of inter-organisational collaboration enabled the identification of a gap in the knowledge relative to workplace design. This new context generated the formulation of a unique research construct, the NetWorkPlace™©, which captures the complexity of contemporary employment structures embracing both physical and virtual work environments and practices, and provided the basis for investigating the factors that are shaping and defining interactions within and across networked organisational settings. The methodological orientation and the methods employed follow a qualitative approach and an abductively driven strategy comprising two distinct components: a cross-sectional study of the whole of the network, and a longitudinal study focusing on a single discrete workplace site. The complexity of the context encountered dictated that a multi-dimensional investigative framework be devised. The adoption of a pluralist ontology and the reconfiguration of approaches from traditional paradigms into a collaborative, trans-disciplinary, multi-method epistemology provided an explicit and replicable method of investigation. The identification and introduction of the NetWorkPlace™© phenomenon, by necessity, spans a number of traditional disciplinary boundaries. Results confirm that in this context architectural research, and by extension architectural practice, must engage with what other disciplines have to offer. The research concludes that no single disciplinary approach to either research or practice in this area of design can suffice. Pierre Bourdieu’s philosophy of ‘practice’ provides a framework within which the governance and technology structures, together with the mechanisms enabling the production of social order in this context, can be understood. This is achieved by applying the concepts of position and positioning to the corporate power dynamics, and integrating the conflict found to exist between enterprise-standard and ferally conceived technology systems.
By extending existing theory and conceptions of ‘place’ and the ‘person-environment relationship’, relevant understandings of the tensions created between Castells’ notions of the space of place and the space of flows are established. The trans-disciplinary approach adopted, underpinned by a robust academic and practical framework, illustrates the potential for expanding the range and richness of understanding applicable to design in this context. The outcome informs workplace design by extending theoretical horizons and by the development of a comprehensive investigative process comprising a suite of models and techniques for both architectural and interior design research and practice, collectively entitled the NetWorkPlace™© Application Framework. This work contributes to the body of knowledge within the design disciplines in substantive, theoretical, and methodological terms, whilst potentially also influencing future organisational network theories, management practices, and information and communication technology applications. The NetWorkPlace™©, as reported in this thesis, constitutes a multi-dimensional concept with the capacity to deal with the fluidity and ambiguity characteristic of the network context, as both a topic of research and a way of going about it.

Relevance:

10.00%

Publisher:

Abstract:

An investigation of cylindrical iron rods burning in pressurised oxygen under microgravity conditions is presented. It has been shown that, under similar experimental conditions, the melting rate of a burning, cylindrical iron rod is higher in microgravity than in normal gravity by a factor of 1.8 ± 0.3. This paper presents microanalysis of quenched samples obtained in a microgravity environment in a 2.0 s duration drop tower facility in Brisbane, Australia. These images indicate that the solid/liquid interface is highly convex in reduced gravity, compared to the planar geometry typically observed in normal gravity, which increases the contact area between liquid and solid phases by a factor of 1.7 ± 0.1. Thus, there is good agreement between the proportional increase in solid/liquid interface surface area and melting rate in microgravity. This indicates that the cause of the increased melting rates for cylindrical iron rods burning in microgravity is altered interfacial geometry at the solid/liquid interface.

Relevance:

10.00%

Publisher:

Abstract:

Concentrations of ultrafine (<0.1 µm) particles (UFPs) and PM2.5 (<2.5 µm) were measured whilst commuting along a similar route by train, bus, ferry and automobile in Sydney, Australia. One trip on each transport mode was undertaken during both morning and evening peak hours throughout a working week, for a total of 40 trips. Analyses comprised one-way ANOVA to compare overall (i.e. all trips combined) geometric mean concentrations of both particle fractions measured across transport modes, and assessment of both the correlation between wind speed and individual trip means of UFPs and PM2.5, and the correlation between the two particle fractions. Overall geometric mean concentrations of UFPs and PM2.5 ranged from 2.8 (train) to 8.4 (bus) × 10⁴ particles cm⁻³ and from 22.6 (automobile) to 29.6 (bus) µg m⁻³, respectively, and a statistically significant difference (p < 0.001) between modes was found for both particle fractions. Individual trip geometric mean concentrations were between 9.7 × 10³ (train) and 2.2 × 10⁵ (bus) particles cm⁻³, and 9.5 (train) to 78.7 (train) µg m⁻³. Estimated commuter exposures were variable: the highest return-trip mean PM2.5 exposure occurred in the ferry mode, whilst the highest UFP exposure occurred during bus trips. The correlation between fractions was generally poor, in keeping with the duality of particle mass and number emissions in vehicle-dominated urban areas. Wind speed was negatively correlated with, and a generally poor determinant of, UFP and PM2.5 concentrations, suggesting a more significant role for other factors in determining commuter exposure.
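
A minimal sketch of the mode comparison described above, assuming hypothetical per-trip mean concentrations (10 trips per mode, matching the 40-trip design); one-way ANOVA on log-transformed data corresponds to comparing geometric means:

```python
import numpy as np
from scipy import stats

# Hypothetical per-trip mean UFP concentrations (particles cm^-3)
# for each mode; values and spread are illustrative only.
rng = np.random.default_rng(1)
trips = {
    "train": rng.lognormal(np.log(2.8e4), 0.5, 10),
    "bus":   rng.lognormal(np.log(8.4e4), 0.5, 10),
    "ferry": rng.lognormal(np.log(5.0e4), 0.5, 10),
    "car":   rng.lognormal(np.log(4.0e4), 0.5, 10),
}

# One-way ANOVA on log-transformed concentrations tests for a
# difference in geometric means across transport modes.
f, p = stats.f_oneway(*(np.log(x) for x in trips.values()))
print(f"F = {f:.2f}, p = {p:.4g}")
for mode, x in trips.items():
    print(mode, f"geometric mean = {np.exp(np.log(x).mean()):.3g}")
```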

Relevance:

10.00%

Publisher:

Abstract:

This article explores how the boards of small firms actually perform strategic tasks. Board strategic involvement has seldom been investigated in the context of small firms. We seek to make a contribution by investigating antecedents of board strategic involvement. The antecedents are “board working style” and “board quality attributes”, which go beyond the board composition features of board size, CEO duality, the ratio of non-executive to executive directors, and ownership. Hypotheses were tested on a sample of 497 Norwegian firms (5 to 30 employees). Our results show that board working style and board quality attributes, rather than board composition features, enhance board strategic involvement. Moreover, board quality attributes outperform board working style in fostering board strategic involvement.

Relevance:

10.00%

Publisher:

Abstract:

The network has emerged as a contemporary worldwide phenomenon, culturally manifested as a consequence of globalization and the knowledge economy. It is in this context that the internet revolution has prompted a radical re-ordering of social and institutional relations and the associated structures, processes and places which support them. Within the duality of virtual space and the augmentation of traditional notions of physical place, organizational structures pose new challenges for the design professions. Technological developments increasingly permit communication anytime and anywhere, and provide the opportunity for both synchronous and asynchronous collaboration. The resultant ecology formed through the network enterprise has produced an often convoluted and complex world in which designers are forced to consider the relevance and meaning of this new context. The role of technology and that of space are thus intertwined in the relation between the network and the individual workplace. This paper explores a way to inform the interior design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an Australia-wide case study of three collaborating, yet independent business entities. It further suggests a link between workplace design and successful business innovation being realized between partnering organizations in Great Britain. The evidence presented indicates that, for architects and interior designers, the scope of the problem has widened, the depth of knowledge required to provide solutions has increased, and the rules of engagement are required to change. The ontological and epistemological positions adopted in the study enabled the spatial dimensions to be examined from both within and beyond the confines of a traditional design-only viewpoint. Importantly, it highlights the significance of trans-disciplinary collaboration in dealing with the multiple layers and complexity of the contemporary social and business world, from both a research and a practice perspective.

Relevance:

10.00%

Publisher:

Abstract:

The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing work, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and lower and upper bounds of the ILS success rates, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound on the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is a quite sharp approximation to the actual ILS success rate. The results also show that variations or uncertainty in the unit-weight variance estimates from epoch to epoch significantly affect the success rates computed with the different methods, and so deserve more attention in order to obtain useful success probability predictions.
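
For reference, the integer bootstrapping success rate used as the sharp lower bound above has a standard closed form in terms of the conditional standard deviations of the (decorrelated) ambiguities; a minimal sketch with illustrative σ values, not those of the paper's data set:

```python
import numpy as np
from scipy.stats import norm

def bootstrap_success_rate(sigmas):
    """P_s = prod_i (2 * Phi(1 / (2 * sigma_i)) - 1), the closed-form
    success rate of the integer bootstrapping estimator, where the
    sigma_i are the conditional standard deviations of the ambiguities
    from the LDL^T factorisation of their covariance matrix.
    """
    s = np.asarray(sigmas, dtype=float)
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * s)) - 1.0))

# Illustrative conditional standard deviations (cycles) after
# decorrelation; smaller sigmas give success rates closer to 1.
print(bootstrap_success_rate([0.05, 0.08, 0.12]))
```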

Relevance:

10.00%

Publisher:

Abstract:

This paper presentation addresses design-based research that became a catalyst for social change in a disadvantaged school community. The aim of the longitudinal research was to prototype an evidence-based model for whole-school digital and print literacy pedagogy renewal among students from low socioeconomic, Indigenous, and migrant backgrounds. Applying Anthony Giddens's principle of the “duality of structure”, the paper presentation interprets how the collective agency of researchers and the school community began to transform the structural properties of the institution in a two-way dynamism, so that the structural properties of the school were not outside of individual action, but were implicated in its reproduction and transformation.

Relevance:

10.00%

Publisher:

Abstract:

Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
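
In the notation common to this line of work, the quantitative relationship referred to above can be sketched as follows (a hedged reconstruction, with R the 0-1 risk and R_φ the surrogate risk):

```latex
% R(f) is the 0-1 risk, R_phi(f) the phi-surrogate risk, and
% R^*, R_phi^* their minimal values over all measurable f.
\psi\bigl(R(f) - R^{*}\bigr) \le R_{\phi}(f) - R_{\phi}^{*}
% psi is obtained from phi by the variational transformation the
% abstract mentions; e.g. the hinge loss gives \psi(\theta)=|\theta|,
% the exponential loss \psi(\theta) = 1 - \sqrt{1-\theta^{2}}.
```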

Relevance:

10.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
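
A minimal CVXPY sketch in the spirit of this approach, assuming a simplified alignment-style objective rather than the paper's full margin-based SDP: the learned kernel is a nonnegative combination of two fixed kernels over training and test points together, fitted to agree with the training labels on the train block (all data and kernel choices here are illustrative).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 labelled training points, 2 unlabelled test points.
X = rng.normal(size=(6, 2))
n_train = 4
y = np.array([1.0, 1.0, -1.0, -1.0])

# Two fixed candidate kernels over ALL points (linear and Gaussian);
# the learned kernel is a weighted combination of these.
K_lin = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq / 2.0)

mu = cp.Variable(2, nonneg=True)   # combination weights
K = mu[0] * K_lin + mu[1] * K_rbf  # stays PSD since mu >= 0
target = np.outer(y, y)

# Maximise agreement between the train block of K and y y^T, with a
# trace normalisation so the objective cannot grow without bound.
objective = cp.Maximize(
    cp.sum(cp.multiply(target, K[:n_train, :n_train])))
prob = cp.Problem(objective, [cp.trace(K) == 6.0])
prob.solve()
print("weights:", mu.value)  # test rows/columns of K are then
                             # inferred from the learned weights
```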

Relevance:

10.00%

Publisher:

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
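
Schematically, with \hat{R}_n the empirical risk over n samples and pen(k) the complexity penalty attached to model class F_k (a hedged reconstruction omitting the paper's precise constants and conditions), the selection rule and the form of the oracle inequality are:

```latex
\hat{f} = \operatorname*{arg\,min}_{f \in F_k,\; k \ge 1}
          \Bigl\{ \hat{R}_n(f) + \mathrm{pen}(k) \Bigr\},
\qquad
R(\hat{f}) - R^{*} \le
  \min_{k}\Bigl\{ \inf_{f \in F_k} R(f) - R^{*}
                  + C\,\mathrm{pen}(k) \Bigr\}.
```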