950 results for class imbalance problems
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix of edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
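The object in question can be illustrated on a tiny instance. The sketch below builds the edge-versus-triangle incidence matrix of the complete graph K4 and reduces it to Smith normal form with a generic textbook algorithm (integer row and column operations); this is a hypothetical illustration of the matrices studied, not Wilson's closed-form result:

```python
def smith_normal_form(A):
    """Reduce an integer matrix to Smith normal form by elementary
    integer row and column operations (textbook algorithm)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # Move a nonzero entry of smallest absolute value into the pivot slot.
        piv = None
        for i in range(t, m):
            for j in range(t, n):
                if A[i][j] and (piv is None or abs(A[i][j]) < abs(A[piv[0]][piv[1]])):
                    piv = (i, j)
        if piv is None:
            break  # remaining submatrix is zero
        A[t], A[piv[0]] = A[piv[0]], A[t]
        for row in A:
            row[t], row[piv[1]] = row[piv[1]], row[t]
        # Clear the pivot row and column; any leftover remainder restarts the step
        # with a strictly smaller pivot, so the loop terminates.
        dirty = False
        for i in range(t + 1, m):
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
            dirty |= A[i][t] != 0
        for j in range(t + 1, n):
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
            dirty |= A[t][j] != 0
        if dirty:
            continue
        # Enforce the divisibility chain d_t | d_{t+1}: fold in an offending row.
        offender = next((i for i in range(t + 1, m)
                         if any(A[i][j] % A[t][t] for j in range(t + 1, n))), None)
        if offender is not None:
            for j in range(t, n):
                A[t][j] += A[offender][j]
            continue
        if A[t][t] < 0:
            A[t][t] = -A[t][t]
        t += 1
    return A

# Edge-versus-triangle incidence matrix of K4: rows are the 6 edges,
# columns are the 4 triangles; entry 1 when the edge lies in the triangle.
EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
TRIANGLES = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
N = [[1 if set(e) <= set(tr) else 0 for tr in TRIANGLES] for e in EDGES]
D = smith_normal_form(N)  # diagonal of D holds the invariant factors of N
```

The invariant factors read off the diagonal determine, for example, for which moduli the matrix has a nontrivial null space over Z_m, which is exactly the kind of information exploited in the zero-sum Ramsey applications below.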
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.
Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.
However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems but in part exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transaction costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.
It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
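The call-market format at the heart of ACE can be shown in miniature. The sketch below clears a single-asset, unit-quantity, uniform-price call (highest bids matched against lowest asks); it is a toy stand-in, not the actual combined-value mechanism, which additionally accepts package bids across permit types:

```python
def clear_call_market(bids, asks):
    """Uniform-price call market for one asset, unit quantities.
    bids/asks are lists of limit prices. Returns (volume, clearing_price);
    clearing_price is the midpoint of the marginal crossing pair (any price
    in the crossing interval would clear the same trades)."""
    bids = sorted(bids, reverse=True)  # most aggressive buyers first
    asks = sorted(asks)                # most aggressive sellers first
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1  # the k-th bid still crosses the k-th ask
    if k == 0:
        return 0, None  # no crossing orders, no trade
    return k, (bids[k - 1] + asks[k - 1]) / 2
```

Because all crossing orders trade at one price in a single batch, thin markets can still concentrate liquidity at the call, which is the property the ACE design relies on.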
With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits declines toward the level of historical emissions, prices are increasing. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.
Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, in which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains concerns detailed information, which is put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points for the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.
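The idea of an eigenvalue search, scanning a parameter and locating the values at which a boundary-condition determinant vanishes, can be shown on a one-dimensional analogue of the mixed Dirichlet/Neumann setting: -u'' = lam*u on (0,1) with u(0)=0 and u'(1)=0 has eigenvalues at the roots of cos(sqrt(lam)). This is a toy sketch of the search strategy only, not the thesis's integral-equation method:

```python
import math

def mixed_bc_eigenvalues(n, tol=1e-12):
    """Return the first n eigenvalues of -u'' = lam*u on (0,1) with
    u(0)=0 (Dirichlet) and u'(1)=0 (Neumann). With u = sin(sqrt(lam)*x),
    the Neumann condition forces cos(sqrt(lam)) = 0, so we scan for sign
    changes of f and refine each bracket by bisection."""
    f = lambda lam: math.cos(math.sqrt(lam))
    roots = []
    x, step = 1e-9, 0.5  # coarse scan; fine for the well-separated first roots
    while len(roots) < n:
        if f(x) * f(x + step) < 0:
            a, b = x, x + step
            while b - a > tol:
                mid = (a + b) / 2
                if f(a) * f(mid) <= 0:
                    b = mid
                else:
                    a = mid
            roots.append((a + b) / 2)
        x += step
    return roots
```

The exact eigenvalues are ((2k+1)*pi/2)**2, which the search reproduces; in the thesis the scalar f is replaced by a measure of invertibility of a boundary integral operator, but the scan-and-refine structure is the same in spirit.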
Abstract:
This investigation deals with certain generalizations of the classical uniqueness theorem for the second boundary-initial value problem in the linearized dynamical theory of not necessarily homogeneous or isotropic elastic solids. First, the regularity assumptions underlying the foregoing theorem are relaxed by admitting stress fields with suitably restricted finite jump discontinuities. Such singularities are familiar from known solutions to dynamical elasticity problems involving discontinuous surface tractions or non-matching boundary and initial conditions. The proof of the appropriate uniqueness theorem given here rests on a generalization of the usual energy identity to the class of singular elastodynamic fields under consideration.
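For orientation, the conventional energy identity referred to above can be sketched in its standard classical form (regular fields, zero body force; the thesis's contribution is extending this to the singular fields just described): for an elastodynamic field on a region R with boundary ∂R,

\[
\frac{d}{dt}\int_{R}\Big(\tfrac{1}{2}\,\rho\,\dot u_i\,\dot u_i
 + \tfrac{1}{2}\,\varepsilon_{ij}\,c_{ijkl}\,\varepsilon_{kl}\Big)\,dV
 \;=\; \int_{\partial R}\sigma_{ij}\,n_j\,\dot u_i\,dA ,
\]

so the difference of two solutions with identical boundary tractions and initial data has constant, hence identically zero, total energy, which forces the difference field to vanish and yields uniqueness.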
Following this extension of the conventional uniqueness theorem, we turn to a further relaxation of the customary smoothness hypotheses and allow the displacement field to be differentiable merely in a generalized sense, thereby admitting stress fields with square-integrable unbounded local singularities, such as those encountered in the presence of focusing of elastic waves. A statement of the traction problem applicable in these pathological circumstances necessitates the introduction of "weak solutions" to the field equations that are accompanied by correspondingly weakened boundary and initial conditions. A uniqueness theorem pertaining to this weak formulation is then proved through an adaptation of an argument used by O. Ladyzhenskaya in connection with the first boundary-initial value problem for a second-order hyperbolic equation in a single dependent variable. Moreover, the second uniqueness theorem thus obtained contains, as a special case, a slight modification of the previously established uniqueness theorem covering solutions that exhibit only finite stress-discontinuities.
Abstract:
In the problem of one-class classification (OCC), one of the classes, the target class, has to be distinguished from all other possible objects, considered as nontargets. This situation arises in many biomedical problems, for example in diagnosis, image-based tumor recognition, or analysis of electrocardiogram data. In this paper an approach to OCC based on a typicality test is experimentally compared with reference state-of-the-art OCC techniques (Gaussian, mixture of Gaussians, naive Parzen, Parzen, and support vector data description) using biomedical data sets. We evaluate the ability of the procedures using twelve experimental data sets with not necessarily continuous data. As there are few benchmark data sets for one-class classification, all data sets considered in the evaluation have multiple classes; each class in turn is considered as the target class, and the units in the other classes are considered as new units to be classified. The results of the comparison show the good performance of the typicality approach, which remains applicable to high-dimensional data; it is worth mentioning that it can be used for any kind of data (continuous, discrete, or nominal), whereas the application of the state-of-the-art approaches is not straightforward when nominal variables are present.
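The flavor of a typicality-style one-class decision can be sketched as follows: score a new unit by its mean distance to the target sample and accept it when that score is not atypically large relative to leave-one-out scores of the sample itself. This is an illustrative distance-based stand-in, not the paper's exact procedure:

```python
import numpy as np

def typicality_accept(target_sample, x, alpha=0.05):
    """Accept x as a target if its mean distance to the target sample is not
    atypically large. The null distribution of the score is estimated by the
    leave-one-out scores of the target sample; alpha is the rejection level."""
    target_sample = np.asarray(target_sample, dtype=float)
    x = np.asarray(x, dtype=float)
    score = np.mean(np.linalg.norm(target_sample - x, axis=1))
    loo = []
    for i in range(len(target_sample)):
        rest = np.delete(target_sample, i, axis=0)
        loo.append(np.mean(np.linalg.norm(rest - target_sample[i], axis=1)))
    return score <= np.quantile(loo, 1 - alpha)
```

Because the decision only needs a distance (or more generally a typicality score), the same template extends to discrete or nominal data by swapping in an appropriate dissimilarity, which is the flexibility the abstract highlights.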
Abstract:
This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation. © 2012 AACC (American Automatic Control Council).
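The algebraic Riccati computation at the core of such results can be sketched generically. The snippet below solves a discrete-time algebraic Riccati equation by iterating the finite-horizon recursion to convergence and reads off the feedback gain; this is a plain centralized LQR sketch of the building block, not the paper's decentralized decomposition:

```python
import numpy as np

def dare_fixed_point(A, B, Q, R, iters=1000):
    """Iterate the Riccati recursion P <- Q + A'P(A - BK) with
    K = (R + B'PB)^{-1} B'PA until P converges to the stabilizing
    solution of the discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                      # value-function update
    return P, K
```

For the scalar system A = B = Q = R = 1 the fixed point is the golden ratio, P = (1 + sqrt(5))/2, with gain K = 1/P, which makes a convenient sanity check.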
Abstract:
This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra, Lenstra, Lovász (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited for reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
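The bridge from subset sum to lattice reduction is the classic Lagarias-Odlyzko style basis: a 0/1 combination of the first rows plus the final row yields a vector whose last coordinate vanishes exactly when the chosen subset hits the target, so a solving subset appears as a short lattice vector. The sketch below builds that basis only (the reduction step, LLL or Seysen, is omitted):

```python
def subset_sum_basis(weights, target, scale=1):
    """Lattice basis for a subset sum instance: row i is e_i with
    scale*weights[i] appended; the last row is (0, ..., 0, -scale*target).
    Summing rows i in a subset S plus the last row gives the vector
    (indicator of S, scale*(sum of S - target)), which is short when S
    sums to the target. scale magnifies the penalty on missing the target."""
    n = len(weights)
    basis = [[1 if j == i else 0 for j in range(n)] + [scale * weights[i]]
             for i in range(n)]
    basis.append([0] * n + [-scale * target])
    return basis
```

Feeding this basis to a reduction algorithm and scanning the reduced vectors for 0/1 coordinates with a zero last entry is the attack the thesis accelerates by combining Seysen's global reduction with LLL.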
Abstract:
Clare, A. and King, R.D. (2002) Machine learning of functional class from phenotype data. Bioinformatics 18(1), 160-166.
Abstract:
Plakhov, A.Y. and Gouveia, P.D.F. (2007) 'Problems of maximal mean resistance on the plane', Nonlinearity 20(9), pp. 2271-2287.
Abstract:
The thesis analyses the roles and experiences of female members of the Irish landed class (wives, sisters and daughters of gentry and aristocratic landlords with estates over 1,000 acres), using primary personal material generated by twelve sample families over an important period of decline for the class and of growing rights for women. Notably, it analyses the experiences of relatively unknown married and unmarried women, something previously untried in Irish historiography. It demonstrates that women’s roles were more significant than has been assumed in the existing literature, and leads to a more rounded understanding of the entire class. Four chapters focus on themes which emerge from the sources used and which deal with women’s roles both inside and outside the home. These chapters argue that: married and unmarried women were more closely bound to the priorities of their class than of their sex, and prioritised male-centred values of family and estate; male and female duties on the property overlapped, as marriage relationships were more equal than the legislation of the time would suggest; London was the cultural centre for this class, and due to close familial links with Britain (60% of sample daughters married English men) their self-perception was British or English as well as Irish; with the self-confidence of their class, these women enjoyed cultural and political activities and movements outside the home (sport, travel, fashion, art, writing, philanthropy, (anti-)suffrage, and politics); far from being pawns in arranged marriages, women were deeply conscious of their marriage decisions and chose socially, financially and personally compatible husbands, and also looked for sexual satisfaction; childbirth sometimes caused lasting health problems, but pregnancy did not confine wealthy women to an invalid state; and, in opposition to the stereotypical distant aristocratic mother, these women breastfed their children and were involved mothers.
However, motherhood was not permitted to impinge on the more pressing role of wife.
Abstract:
We consider two “minimum” NP-hard job shop scheduling problems to minimize the makespan. In one of the problems every job has to be processed on at most two out of three available machines. In the other problem there are two machines, and a job may visit one of the machines twice. For each problem, we define a class of heuristic schedules in which certain subsets of operations are kept as blocks on the corresponding machines. We show that for each problem the value of the makespan of the best schedule in that class cannot be less than 3/2 times the optimal value, and present algorithms that guarantee a worst-case ratio of 3/2.
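Evaluating any candidate schedule of the kind discussed reduces to a makespan computation. The helper below computes the makespan of the semi-active schedule induced by a global operation order; it is a generic utility for experimenting with block-style heuristics, not the paper's 3/2-approximation algorithm:

```python
def makespan(jobs, order):
    """jobs[j] is a list of (machine, duration) operations in their required
    order; order is a global sequence of (job, op_index) pairs that must
    respect each job's internal order. Each operation starts as soon as both
    its job and its machine are free; returns the resulting makespan."""
    machine_free = {}
    job_free = [0] * len(jobs)
    for j, k in order:
        m, d = jobs[j][k]
        start = max(job_free[j], machine_free.get(m, 0))
        job_free[j] = machine_free[m] = start + d
    return max(job_free)
```

With a helper like this one can enumerate the block-respecting orders of a small instance and compare the best of them against the true optimum, which is how a 3/2 worst-case bound is checked on examples.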
Abstract:
INTRODUCTION:
Class II malocclusion is often associated with a retrognathic mandible. Some of these problems require surgical correction. The purposes of this study were to investigate treatment outcomes in patients with Class II malocclusions whose treatment included mandibular advancement surgery and to identify predictors of good outcomes.
METHODS:
Pretreatment and posttreatment cephalometric radiographs of 90 patients treated with mandibular advancement surgery by 57 consultant orthodontists in the United Kingdom before September 1998 were digitized, and cephalometric landmarks were identified. Paired samples t tests were used to compare the pretreatment and posttreatment cephalometric values for each patient. For each cephalometric variable, the proportion of patients falling within the ideal range was identified. Multiple logistic regression analysis was performed to identify predictors of achieving ideal range outcomes for the key skeletal (ANB and SNB angles), dental (overjet and overbite), and soft-tissue (Holdaway angle) measurements.
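The per-variable pre/post comparison described above rests on the paired-samples t statistic. A minimal sketch (an illustrative helper, not the study's actual software) computes it directly from the paired measurements:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre- vs post-treatment measurements:
    t = mean(d) / sqrt(var(d)/n) for the differences d = post - pre,
    compared against a t distribution with n-1 degrees of freedom."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of d
    return mean / math.sqrt(var / n)
```

Each cephalometric variable yields one such statistic across the 90 patients; the logistic regressions then model whether each patient's posttreatment value lands in the ideal range.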
RESULTS:
An overjet within the ideal range of 1 to 4 mm was achieved in 72% of patients and was more likely with larger initial ANB angles. Horizontal correction of the incisor relationship was achieved by a combination of 75% skeletal movement and 25% dentoalveolar change. An ideal posttreatment ANB angle was achieved in 42% of patients and was more likely in females and those with larger pretreatment ANB angles. Ideal soft-tissue Holdaway angles (7 degrees to 14 degrees) were achieved in 49% of patients and were more likely in females and those with smaller initial SNA angles. Mandibular incisor decompensation was incomplete in 28% of patients and was more likely in females and patients with greater pretreatment mandibular incisor proclination. Correction of increased overbite was generally successful, although anterior open bites were found in 16% of patients at the end of treatment. These patients were more likely to have had initial open bites.
CONCLUSIONS:
Mandibular surgery had a good success rate in normalizing the main dental and skeletal relationships. Less ideal soft-tissue profile outcomes were associated with larger pretreatment SNA-angle values, larger final mandibular incisor inclinations, and smaller final maxillary incisor inclinations. The use of mandibular surgery to correct anterior open bite was associated with poor outcomes.
Abstract:
Objectives: This study examined the validity of a latent class typology of adolescent drinking based on four alcohol dimensions: frequency of drinking, quantity consumed, frequency of binge drinking, and the number of alcohol-related problems encountered. Method: Data used were from the 1970 British Cohort Study sixteen-year-old follow-up. Partial or complete responses to the selected alcohol measures were provided by 6,516 cohort members. The data were collected via a series of postal questionnaires. Results: A five-class LCA typology was constructed. Around 12% of the sample were classified as ‘hazardous drinkers’, reporting frequent drinking, high levels of alcohol consumed, frequent binge drinking and multiple alcohol-related problems. Multinomial logistic regression, with multiple imputation for missing data, was used to assess the covariates of adolescent drinking patterns. Hazardous drinking was associated with being white, being male, having heavy-drinking parents (in particular fathers), smoking, illicit drug use, and minor and violent offending behaviour. Non-significant associations were found between drinking patterns and general mental health and attention deficit disorder. Conclusion: The latent class typology exhibited concurrent validity in terms of its ability to distinguish respondents across a number of alcohol and non-alcohol indicators. Notwithstanding a number of limitations, latent class analysis offers an alternative data reduction method for the construction of drinking typologies that addresses known weaknesses inherent in more traditional classification methods.
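The model family behind such typologies can be sketched in a few lines: a latent class model assigns each respondent to one of a small number of classes, each with its own probability of endorsing each binary indicator, fitted by EM. This is a minimal two-class, binary-item sketch of the idea; the study itself used five classes, richer alcohol measures, and formal fit criteria:

```python
import numpy as np

def lca_em(X, n_classes=2, iters=200, seed=0):
    """EM for a latent class model with binary indicators. Class c has
    weight pi[c] and item-endorsement probabilities theta[c, j]; resp[i, c]
    is respondent i's posterior probability of membership in class c."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))
    for _ in range(iters):
        # E-step: posterior class membership for every respondent.
        like = np.ones((n, n_classes))
        for c in range(n_classes):
            like[:, c] = pi[c] * np.prod(
                np.where(X == 1, theta[c], 1 - theta[c]), axis=1)
        resp = like / like.sum(axis=1, keepdims=True)
        # M-step: class weights and item probabilities, responsibility-weighted.
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, resp
```

Fitting models with increasing n_classes and comparing fit statistics is how a five-class solution of the kind reported would be selected.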
Abstract:
In recent times the sociology of childhood has played an important role in challenging the dominance of Piagetian models of child development in shaping the way we think about children and childhood. What such work has successfully achieved is to increase our understanding of the socially constructed nature of childhood; the social competence and agency of children; and the diverse nature of children’s lives, reflecting the very different social contexts within which they are located. One problem associated with this work, however, is that in its critique of developmentalism it has tended simply to replace one orthodoxy (psychology) with another (sociology) rather than providing the opportunity to transcend this divide. The purpose of this paper is to demonstrate some of the potential ways in which the sociological/psychological divide might be transcended and the benefits of this for understanding, more fully, the ‘production’ of children’s schooling identities. In particular it shows how some of the key sociological insights to be found in the work of Bourdieu may be usefully extended by work inspired by the developmental psychologist Vygotsky. The key arguments are illustrated by reference to ethnographic data relating to the schooling experiences and identities of a group of 5-6-year-old working-class boys.