992 results for Spectral Gap Problems
Abstract:
This thesis is concerned with various aspects of air pollution due to smell, the impact it has on communities exposed to it, the means by which it may be controlled, and the manner in which a local authority may investigate the problems it causes. The approach is a practical one, drawing on examples occurring within a local authority's experience; for that reason the research is anecdotal and is not a comprehensive treatise on the full range of options available. Odour pollution is not yet a well-organised discipline and might be considered esoteric, as it is necessary to incorporate elements of both science and the humanities. It has been necessary to range widely across a number of aspects of the subject, so discussion is often restricted, but many references have been included to enable a reader to pursue a particular point in greater depth. In a 'fuzzy' subject there is often a yawning gap separating theory and practice, so case studies have been used to illustrate the interplay of various disciplines in the resolution of a problem. The essence of any science is observation and measurement. Observations have been made of the spread of odour pollution through a community, together with the relevant meteorological data, so that a mathematical model could be constructed and its predictions checked. The model has been used to explore the results of some options for odour control. Measurements of odour perception and human behaviour seldom have the precision and accuracy of the physical sciences. However, methods of social research enabled individual perception of odour pollution to be quantified and an insight to be gained into the reaction of a community exposed to it. Odours have four attributes that can be measured and that together provide a complete description of their perception. No objective techniques of measurement have yet been developed, but in this thesis simple, structured procedures of subjective assessment have been improvised, and their use enabled the functioning of the components of an odour control system to be assessed. Such data enabled the action of the system to be communicated in terms that are understood by a non-specialist audience.
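The abstract does not specify which dispersion model the thesis constructs; as a hedged illustration of the kind of calculation involved, a minimal Gaussian plume sketch for a single steady point source, with hypothetical dispersion coefficients standing in for values fitted to observed meteorological data, might look like this in Python:

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a_y, a_z):
    """Odour concentration downwind of a steady point source.

    Textbook Gaussian plume: emission rate Q (odour units/s), wind speed u
    (m/s) along x, effective release height H (m). The power-law dispersion
    coefficients a_y, a_z are illustrative stand-ins, not the thesis's values.
    """
    sigma_y = a_y * x**0.894            # lateral spread grows downwind
    sigma_z = a_z * x**0.894            # vertical spread grows downwind
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # the second exponential is the image source reflecting the plume off the ground
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# concentration 500 m downwind, on the plume centre line, at nose height
c = gaussian_plume(x=500.0, y=0.0, z=1.5, Q=100.0, u=3.0, H=10.0,
                   a_y=0.128, a_z=0.093)
print(f"odour concentration ~ {c:.3g} odour units / m^3")
```

Running such a model over a grid of receptor points, for the observed wind statistics, is what allows predicted odour spread to be checked against the community observations described above.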
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular we consider the solution of steady-state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
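For orientation, Roth's method expands the solution in a double Fourier series over the rectangle. A minimal sketch for the model problem -∇²u = f on [0,a]×[0,b] with homogeneous Dirichlet conditions (an assumed simplification; the thesis treats more general boundary conditions, and the slow series convergence noted above is visible here) is:

```python
import numpy as np

def poisson_double_sine(f, a, b, M=40, N=40, grid=201):
    """Solve -laplace(u) = f on [0,a]x[0,b] with u = 0 on the boundary
    by a truncated double sine series: u = sum A_mn sin(m pi x/a) sin(n pi y/b)
    with A_mn = f_mn / ((m pi/a)^2 + (n pi/b)^2)."""
    x = np.linspace(0.0, a, grid)
    y = np.linspace(0.0, b, grid)
    X, Y = np.meshgrid(x, y, indexing="ij")
    F = f(X, Y)
    u = np.zeros_like(F)
    for m in range(1, M + 1):
        sx = np.sin(m * np.pi * x / a)
        for n in range(1, N + 1):
            sy = np.sin(n * np.pi * y / b)
            basis = np.outer(sx, sy)
            # double sine coefficient of f by trapezoidal quadrature
            f_mn = 4.0 / (a * b) * np.trapz(np.trapz(F * basis, y, axis=1), x)
            lam = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
            u += (f_mn / lam) * basis
    return x, y, u

# uniform unit source on the unit square; exact centre value is ~0.07367
x, y, u = poisson_double_sine(lambda X, Y: np.ones_like(X), 1.0, 1.0)
print(f"u(0.5, 0.5) ~ {u[100, 100]:.5f}")
```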
Abstract:
This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for this model was demonstrated by the growth of local economic initiatives during the 1970s and the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of research involved confirming that the potential for micro-economic and spatial initiatives existed. This was done by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand. This led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap, providing a low-budget model, capable of implementation at the small-area level, that generates a vast array of industrially disaggregated data, in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
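The abstract does not publish the model's equations; a minimal input-output style sketch of how an employment-dependency calculation can relate disaggregated employment to final demand, via the Leontief inverse L = ê(I - A)⁻¹f, is given below (all coefficients are hypothetical, not the thesis's data):

```python
import numpy as np

# Hypothetical 3-industry example. A[i, j] = input required from industry i
# per unit of gross output of industry j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
emp_per_output = np.array([8.0, 5.0, 12.0])   # jobs per GBP m of gross output
final_demand = np.array([120.0, 80.0, 60.0])  # UK final demand, GBP m

# Leontief inverse: gross output needed to satisfy final demand
output = np.linalg.solve(np.eye(3) - A, final_demand)
employment = emp_per_output * output          # industrially disaggregated jobs

print("gross output by industry:", np.round(output, 1))
print("dependent employment    :", np.round(employment, 0))
# Simulating a shock to final demand just re-runs the last two lines, which
# is what makes this class of model cheap enough for small-area impact work.
```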
Abstract:
The main objective of the study was to investigate the relationship between parent-related, acculturation-related, and substance use-related variables found within individual, familial/parental, peer and school adolescent ecological domains, in a clinical sample (i.e. adolescents who met criteria for a Diagnostic and Statistical Manual-IV [DSM-IV] clinical diagnosis of substance abuse/dependence) of Hispanic adolescents from Miami, Florida. The sample for this study consisted of 94 adolescent-mother pairs. The adolescent sample was 65% male and 35% female, with a mean age of 15 years. More than half of the adolescents were born in the United States (60%) and had resided in the U.S. for an average of 12 years; 80% of the caregivers (primarily mothers) were foreign-born and had lived in the U.S. for an average of 21 years. Correlation and hierarchical regression were used to answer the research questions. The findings indicate that the hypothesized model and corresponding anticipated effect of the relationship between parental school and peer involvement on adolescents' frequency of alcohol, marijuana and cocaine use was not supported by the data. Parental “acculturation-related” variables did not explain any of the variance in adolescent substance use frequency in this sample. Mediation and moderation models were not supported either. However, some interesting relationships were found. The larger the acculturation gap, the lower parental involvement in school tended to be (r = -.21, p < .05). Adolescents who experienced a greater acculturation gap with their parents (r = -.81, p < .01) had an earlier onset of marijuana use (r = -.33, p < .01) and cocaine use (r = -.24, p < .01). Less acculturated parents experienced more parenting stress (r = -.31, p < .01). Attachment was positively associated with parental peer involvement (r = .24, p < .05) and inversely associated with parenting acculturative stress (r = -.24, p < .05). Attachment was also positively associated with marijuana use (r = .39, p < .01) and cocaine use (r = .33, p < .01). Adolescent males reported being more attached to their mothers than adolescent females (r = .22, p < .05), and they also reported using marijuana more frequently than females (r = .21, p < .05).
Abstract:
This dissertation delivers a framework to diagnose the bull-whip effect (BWE) in supply chains and then identify methods to minimize it. Such a framework is needed because, in spite of the significant amount of literature discussing the bull-whip effect, many companies continue to experience the wide variations in demand that are indicative of it. While the theory and knowledge of the bull-whip effect are well established, there is still no engineering framework and method to systematically identify the problem, diagnose its causes, and identify remedies. The present work seeks to fill this gap by providing a holistic, systems perspective on bull-whip identification and diagnosis. The framework employs the SCOR reference model to examine the supply chain processes with a baseline measure of demand amplification. Research into the supply chain's structural and behavioral features is then conducted by means of the system dynamics modeling method. The diagnostic framework, called the Demand Amplification Protocol (DAMP), relies not only on the improvement of existing methods but also introduces original developments needed to accomplish a successful diagnosis. DAMP contributes a comprehensive methodology that captures the dynamic complexities of supply chain processes. The method also contributes a BWE measurement method that is suitable for actual supply chains because of its low data requirements, and introduces a BWE scorecard for relating established causes to a central BWE metric. In addition, the dissertation makes a methodological contribution to the analysis of system dynamics models with a technique for statistical screening called SS-Opt, which determines the inputs with the greatest impact on the bull-whip effect by means of perturbation analysis and subsequent multivariate optimization. The dissertation describes the implementation of the DAMP framework in an actual case study that exposes the approach, analysis, results and conclusions. The case study suggests that a solution balancing costs against demand amplification can better serve both the firms' and the supply chain's interests. Insights point to supplier network redesign, postponement in manufacturing operations and collaborative forecasting agreements with main distributors.
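The abstract does not define the central BWE metric used in the scorecard; a common low-data amplification measure, sketched here purely for illustration, compares the variability of orders placed upstream with the variability of end-customer demand:

```python
import numpy as np

def amplification_ratio(demand, orders):
    """Bull-whip measure as the ratio of coefficients of variation
    (a standard low-data metric; illustrative, not the DAMP metric)."""
    cv = lambda s: np.std(s, ddof=1) / np.mean(s)
    return cv(orders) / cv(demand)

rng = np.random.default_rng(7)
demand = 100 + rng.normal(0, 5, 52)           # weekly end-customer demand
# a naive policy that over-reacts to demand changes amplifies variability
orders = demand + 2.0 * np.diff(demand, prepend=demand[0])

ratio = amplification_ratio(demand, orders)
print(f"amplification ratio = {ratio:.2f}")   # > 1 indicates bull-whip
```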
Abstract:
Commonly used paradigms for studying child psychopathology emphasize individual-level factors and often neglect the role of context in shaping risk and protective factors among children, families, and communities. To address this gap, we evaluated influences of ecocultural contextual factors on definitions, development of, and responses to child behavior problems and examined how contextual knowledge can inform culturally responsive interventions. We drew on Super and Harkness' "developmental niche" framework to evaluate the influences of physical and social settings, childcare customs and practices, and parental ethnotheories on the definitions, development of, and responses to child behavior problems in a community in rural Nepal. Data were collected between February and October 2014 through in-depth interviews with a purposive sampling strategy targeting parents (N = 10), teachers (N = 6), and community leaders (N = 8) familiar with child-rearing. Results were supplemented by focus group discussions with children (N = 9) and teachers (N = 8), pile-sort interviews with mothers (N = 8) of school-aged children, and direct observations in homes, schools, and community spaces. Behavior problems were largely defined in light of parents' socialization goals and role expectations for children. Certain physical settings and times were seen to carry greater risk for problematic behavior when children were unsupervised. Parents and other adults attempted to mitigate behavior problems by supervising children and their social interactions, providing for their physical needs, educating them, and through a shared verbal reminding strategy (samjhaune). The findings of our study illustrate the transactional nature of behavior problem development, which involves context-specific goals, roles, and concerns that are likely to affect adults' interpretations of and responses to children's behavior. Ultimately, employing a developmental niche framework can elucidate setting-specific risk and protective factors for culturally compelling intervention strategies.
Abstract:
Empirical studies of education programs and systems, by nature, rely upon use of student outcomes that are measurable. Often, these come in the form of test scores. However, in light of growing evidence about the long-run importance of other student skills and behaviors, the time has come for a broader approach to evaluating education. This dissertation undertakes experimental, quasi-experimental, and descriptive analyses to examine social, behavioral, and health-related mechanisms of the educational process. My overarching research question is simply: which inside- and outside-the-classroom features of schools and educational interventions are most beneficial to students in the long term? Furthermore, how can we apply this evidence toward informing policy that could effectively reduce stark social, educational, and economic inequalities?
The first study of three assesses mechanisms by which the Fast Track project, a randomized intervention in the early 1990s for high-risk children in four communities (Durham, NC; Nashville, TN; rural PA; and Seattle, WA), reduced delinquency, arrests, and health and mental health service utilization in adolescence through young adulthood (ages 12-20). A decomposition of treatment effects indicates that about a third of Fast Track’s impact on later crime outcomes can be accounted for by improvements in social and self-regulation skills during childhood (ages 6-11), such as prosocial behavior, emotion regulation and problem solving. These skills proved less valuable for the prevention of mental and physical health problems.
The second study contributes new evidence on how non-instructional investments – such as increased spending on school social workers, guidance counselors, and health services – affect multiple aspects of student performance and well-being. Merging several administrative data sources spanning the 1996-2013 school years in North Carolina, I use an instrumental variables approach to estimate the extent to which local expenditure shifts affect students' academic and behavioral outcomes. My findings indicate that exogenous increases in spending on non-instructional services not only reduce student absenteeism and disciplinary problems (important predictors of long-term outcomes) but also significantly raise student achievement, at magnitudes similar to those of corresponding increases in instructional spending. Furthermore, subgroup analyses suggest that investments in student support personnel (such as social workers, health services, and guidance counselors) in schools with concentrated low-income student populations could go a long way toward closing socioeconomic achievement gaps.
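As a hedged illustration of this identification strategy (the instrument and data below are simulated, not the study's): two-stage least squares recovers the spending effect when an exogenous shifter moves expenditure but affects outcomes only through it:

```python
import numpy as np

# Simulated setting: an unobserved district factor confounds spending and
# outcomes; Z is an exogenous expenditure shifter (the exclusion restriction
# any IV study must defend).
rng = np.random.default_rng(5)
n = 5000
confound = rng.normal(size=n)
Z = rng.normal(size=n)
X = 0.8 * Z + 0.5 * confound + rng.normal(size=n)   # spending
Y = 0.3 * X - 0.6 * confound + rng.normal(size=n)   # student outcome

add_const = lambda v: np.column_stack([np.ones(n), v])
# stage 1: project endogenous spending on the instrument
X_hat = add_const(Z) @ np.linalg.lstsq(add_const(Z), X, rcond=None)[0]
# stage 2: regress the outcome on fitted spending
beta = np.linalg.lstsq(add_const(X_hat), Y, rcond=None)[0]

ols = np.linalg.lstsq(add_const(X), Y, rcond=None)[0]
print(f"OLS slope (biased): {ols[1]:.3f}   2SLS slope: {beta[1]:.3f}  (true 0.3)")
```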
The third study examines individual pathways that lead to high school graduation or dropout. It employs a variety of machine learning techniques, including decision trees, random forests with bagging and boosting, and support vector machines, to predict student dropout using longitudinal administrative data from North Carolina. I consider a large set of predictor measures from grades three through eight including academic achievement, behavioral indicators, and background characteristics. My findings indicate that the most important predictors include eighth grade absences, math scores, and age-for-grade as well as early reading scores. Support vector classification (with a high cost parameter and low gamma parameter) predicts high school dropout with the highest overall validity in the testing dataset at 90.1 percent, followed by decision trees with boosting and interaction terms at 89.5 percent.
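A minimal sketch of the final classifier described above, written with scikit-learn on synthetic stand-in data (the feature construction and parameter values are illustrative, not the study's):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the administrative data; real predictors include
# eighth-grade absences, math scores, and age-for-grade per the abstract.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.poisson(8, n),        # grade-8 absences
    rng.normal(0, 1, n),      # standardized math score
    rng.integers(0, 2, n),    # over-age-for-grade indicator
])
logit = 0.25 * X[:, 0] - 1.2 * X[:, 1] + 1.0 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = dropout

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# "high cost parameter and low gamma parameter", as the abstract describes
clf = make_pipeline(StandardScaler(), SVC(C=100.0, gamma=0.01, kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```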
Abstract:
We report 3 au resolution imaging observations of the protoplanetary disk around TW Hya at 145 and 233 GHz with the Atacama Large Millimeter/submillimeter Array. Our observations revealed two deep gaps (~25-50%) at 22 and 37 au and shallower gaps (a few %) at 6, 28, and 44 au, as recently reported by Andrews et al. (2016). The central hole with a radius of 3 au was also marginally resolved. The most remarkable finding is that the spectral index α(R) between bands 4 and 6 peaks at the 22 au gap. The derived power-law index of the dust opacity β(R) is ~1.7 at the 22 au gap and decreases toward the disk center to ~0. The most prominent gap at 22 au could be caused by the gravitational interaction between the disk and an unseen planet with a mass of ≲1.5 M_Neptune, although other origins may be possible. The planet-induced gap is supported by the fact that β(R) is enhanced at the 22 au gap, indicating a deficit of mm-sized grains within the gap due to dust filtration by a planet.
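For reference, these quantities are related by standard definitions (the paper's exact derivation may include temperature and optical-depth corrections): the spectral index follows from the ratio of intensities at the two observing frequencies, and in the optically thin Rayleigh-Jeans limit the dust-opacity slope is approximately

```latex
\alpha(R) = \frac{\ln\!\big( I_{233\,\mathrm{GHz}}(R) / I_{145\,\mathrm{GHz}}(R) \big)}{\ln(233/145)},
\qquad
\beta(R) \approx \alpha(R) - 2 .
```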
Abstract:
Accessibility concepts are increasingly acknowledged as fundamental to understanding cities and urban regions. Accordingly, accessibility instruments have been recognised as valuable support tools for land-use and transport planning. However, despite the relatively large number of instruments available in the literature, they are not widely used in planning practice. This paper aims to explore why. To this end, we focus our research on the perceived user-friendliness and usefulness of accessibility instruments. First, we surveyed instrument developers, providing an overview of the characteristics of the accessibility instruments available and of developers' perceptions of their user-friendliness in planning practice. Second, we brought together developers and planning practitioners in local workshops across Europe and Australia, where participants were asked to use insights provided by accessibility instruments in the development of planning strategies. We found that most practitioners are convinced of the usefulness of accessibility instruments in planning practice, as they generate new and relevant insights for planners. Findings suggest that the implementation gap is caused not only by user-friendliness problems but mainly by organisational barriers and the lack of institutionalisation of accessibility instruments. Improving user-friendliness may therefore contribute only a little to the successful implementation of accessibility concepts in planning practice. In fact, there seems to be more to gain from the active and continued engagement of instrument developers with planning practitioners and from the institutionalisation of accessibility planning.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences from RPCA that make theoretical guarantees difficult.
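For concreteness, a frame {f_i} in R^n is scalable when nonnegative weights c_i exist with Σ c_i f_i f_iᵀ = I, i.e. the rescaled frame is tight. A minimal sketch posing this as a nonnegative least-squares feasibility check (an illustrative formulation, not the thesis's own algorithm):

```python
import numpy as np
from scipy.optimize import nnls

def scaling_weights(F):
    """Find nonnegative weights c_i with sum_i c_i f_i f_i^T = I.

    F holds the frame vectors as columns (shape n x m). Returns (c, residual);
    the frame is scalable exactly when the residual is numerically zero.
    """
    n, m = F.shape
    # each column of A is the rank-one matrix f_i f_i^T, flattened
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
    c, res = nnls(A, np.eye(n).ravel())
    return c, res

# Mercedes-Benz frame in R^2: three unit vectors at 120 degrees, a tight
# (hence trivially scalable) frame
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
c, res = scaling_weights(F)
print("weights:", np.round(c, 3), "residual:", f"{res:.2e}")  # ~[2/3 2/3 2/3]
```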
Abstract:
The aim of this paper is to provide a comprehensive study of some linear non-local diffusion problems in metric measure spaces. These include, for example, open subsets of ℝ^N, graphs, manifolds, multi-structures and some fractal sets. For this, we study regularity, compactness, positivity and the spectrum of the stationary non-local operator. We then study the solutions of linear evolution non-local diffusion problems, with emphasis on similarities and differences with the standard heat equation in smooth domains. In particular, we prove weak and strong maximum principles and describe the asymptotic behaviour using spectral methods.
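A prototypical operator of this class, stated here for orientation (the abstract does not give the paper's precise hypotheses on the kernel J over the metric measure space (X, μ)), is

```latex
Lu(x) = \int_{X} J(x,y)\,\bigl(u(y) - u(x)\bigr)\, d\mu(y),
\qquad
u_t = Lu \quad \text{in } X,\; t > 0,
```

so that, as in the classical case, the long-time behaviour of solutions is governed by the spectrum of the stationary operator L.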
Abstract:
Let S(M) be the ring of (continuous) semialgebraic functions on a semialgebraic set M and S*(M) its subring of bounded semialgebraic functions. In this work we compute the size of the fibers of the spectral maps Spec(j)1 : Spec(S(N)) → Spec(S(M)) and Spec(j)2 : Spec(S*(N)) → Spec(S*(M)) induced by the inclusion j : N ↪ M of a semialgebraic subset N of M. The ring S(M) can be understood as the localization of S*(M) at the multiplicative subset W_M of those bounded semialgebraic functions on M with empty zero set. This provides a natural inclusion i_M : Spec(S(M)) ↪ Spec(S*(M)) that reduces both problems above to an analysis of the fibers of the spectral map Spec(j)2 : Spec(S*(N)) → Spec(S*(M)). If we denote Z := Cl_Spec(S*(M))(M∖N), it holds that the restriction map Spec(j)2| : Spec(S*(N))∖Spec(j)2^-1(Z) → Spec(S*(M))∖Z is a homeomorphism. Our problem concentrates on the computation of the size of the fibers of Spec(j)2 at the points of Z. The size of the fibers of prime ideals "close" to the complement Y := M∖N provides valuable information concerning how N is immersed inside M. If N is dense in M, the map Spec(j)2 is surjective and the generic fiber of a prime ideal p ∈ Z contains infinitely many elements. However, finite fibers may also appear, and we provide a criterion to decide when the fiber Spec(j)2^-1(p) is a finite set for p ∈ Z. If such is the case, our procedure allows us to compute the size s of Spec(j)2^-1(p). If in addition N is locally compact and M is pure dimensional, s coincides with the number of minimal prime ideals contained in p.
Abstract:
Doctor of Philosophy in Mathematics
Abstract:
In recent years a great effort has been put into the development of new techniques for automatic object classification, driven in part by applications such as medical imaging and driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train decomposition, unlike other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model, based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
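For orientation, the TT format of a tensor can be computed by the TT-SVD algorithm (successive truncated SVDs of unfoldings). A minimal sketch of that algorithm, not the thesis's classification models:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Tensor-Train decomposition by successive truncated SVDs (TT-SVD).

    Returns a list of 3-way cores G_k; truncating small singular values
    controls the TT ranks and yields the compression discussed above."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncated TT rank
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# rank-1 4-way test tensor: outer product of four random vectors
rng = np.random.default_rng(1)
T = np.einsum("i,j,k,l->ijkl", *[rng.normal(size=5) for _ in range(4)])
cores = tt_svd(T)
print("TT ranks:", [c.shape[2] for c in cores])   # rank-1 input -> all ones
```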
Abstract:
Spectral sensors are a wide class of devices that are extremely useful for detecting essential information about the environment and materials with a high degree of selectivity. Recently, they have achieved a high degree of integration and a low implementation cost, making them suited for fast, small, and non-invasive monitoring systems. However, the useful information is hidden in the spectra and is difficult to decode, so mathematical algorithms are needed to infer the values of the variables of interest from the acquired data. Among the different families of predictive modeling, Principal Component Analysis and the techniques stemming from it can provide very good performance, as well as small computational and memory requirements. For these reasons, they allow the prediction to be implemented even in embedded and autonomous devices. In this thesis, I will present four practical applications of these algorithms to the prediction of different variables: moisture of soil, moisture of concrete, freshness of anchovies/sardines, and concentration of gases. In all of these cases, the workflow is the same. Initially, an acquisition campaign was performed to acquire both spectra and the variables of interest from samples. These data were then used as input for the creation of the prediction models, to solve both classification and regression problems. From these models, an array of calibration coefficients was derived and used for the implementation of the prediction in an embedded system. The presented results show that this workflow was successfully applied to very different scientific fields, obtaining autonomous and non-invasive devices able to predict the value of physical parameters of choice from new spectral acquisitions.
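A minimal sketch of one regression step of this workflow, on synthetic stand-in spectra (the variable names and data are illustrative, not the thesis's): a principal component regression is fitted offline and then collapsed to a single coefficient array plus intercept, the form that fits on an embedded device:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for an acquisition campaign: 200 spectra of 128 bands
# whose latent factor drives the variable of interest (e.g. soil moisture)
rng = np.random.default_rng(3)
latent = rng.uniform(0, 1, 200)
bands = np.linspace(0, 1, 128)
spectra = (latent[:, None] * np.exp(-((bands - 0.4) ** 2) / 0.01)
           + rng.normal(0, 0.02, (200, 128)))
moisture = 10 + 30 * latent + rng.normal(0, 0.5, 200)

# principal component regression: PCA compresses spectra, then a linear fit
model = make_pipeline(PCA(n_components=4), LinearRegression())
model.fit(spectra, moisture)

# collapse the pipeline to calibration coefficients: y = spectrum @ w + b
pca, lin = model.named_steps["pca"], model.named_steps["linearregression"]
w = pca.components_.T @ lin.coef_
b = lin.intercept_ - pca.mean_ @ w
print("max |pipeline - collapsed|:",
      np.abs(model.predict(spectra) - (spectra @ w + b)).max())
```

The collapsed form shows why this family of models is embedded-friendly: at run time the device only evaluates one dot product per acquired spectrum.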