388 results for Lipschitz Mappings
Abstract:
2000 Mathematics Subject Classification: 54C60, 54C65, 54D20, 54D30.
Abstract:
2000 Mathematics Subject Classification: 49J52, 49J50, 58C20, 26B09.
Abstract:
2000 Mathematics Subject Classification: 54C35, 54D20, 54C60.
Abstract:
Background: Synthetic phonics is the widely accepted approach for teaching reading in English: children are taught to sound out the letters in a word and then blend these sounds together. Aims: We compared the impact of two synthetic phonics programmes on early reading. Sample: Children received Letters and Sounds (L&S; 7 schools), which teaches multiple letter-sound mappings, or Early Reading Research (ERR; 10 schools), which teaches only the most consistent mappings plus frequent words by sight. Method: We measured phonological awareness (PA) and reading from school entry to the end of the second (all schools) or third school year (4 ERR, 3 L&S schools). Results: PA was significantly related to all reading measures for the whole sample. However, there was a closer relationship between PA and exception word reading for children receiving the L&S programme. The programmes were equally effective overall, but their impact on reading interacted significantly with school-entry PA: children with poor PA at school entry achieved higher reading attainment under ERR (a significant group difference on exception word reading at the end of the first year), whereas children with good PA performed equally well under either programme. Conclusions: The more intensive phonics programme (L&S) heightened the association between PA and exception word reading. Although the programmes were equally effective for most children, the results indicate potential benefits of ERR for children with poor PA. We suggest that phonics programmes could be simplified to teach only the most consistent mappings plus frequent words by sight.
Abstract:
The focus of this thesis is the extension of topographic visualisation mappings to incorporate uncertainty. Few visualisation algorithms in the literature can map uncertain data, and fewer still can represent observation uncertainty in the visualisation itself. Modifications are therefore made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in both the observation and visualisation spaces. The resulting mappings are called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space in which each latent visualised point is transformed to a multivariate Gaussian or T-distribution by a feed-forward RBF network. Two types of uncertainty are then characterised, depending on the data and the mapping procedure. Data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher information of a visualised distribution; it indicates how well the data have been interpolated, offering a level of 'surprise' for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. To visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM), and a quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
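As a concrete illustration of the probabilistic-mapping idea, here is a minimal, hypothetical sketch: a feed-forward RBF network is fitted to a stand-in 2-D embedding (PCA here, not the thesis's NeuroScale objective), and each mapped point receives a Gaussian whose variance comes from the network's linear-in-weights posterior. For a Gaussian, the Fisher information about the mean is the inverse (co)variance, so a large variance flags a poorly interpolated, 'surprising' point.

```python
# A minimal, hypothetical sketch of the N-NS idea: a feed-forward RBF network
# maps each observation to a Gaussian in a 2-D visualisation space. The
# training target (a PCA embedding) and noise model are illustrative
# assumptions, not the thesis's actual NeuroScale objective.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # observations (N x D)

# Illustrative training target: a 2-D PCA embedding standing in for the
# distance-preserving NeuroScale solution.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T                        # N x 2 latent means to regress onto

# RBF design matrix with M centres drawn from the data.
M, sigma = 30, 2.0
centres = X[rng.choice(len(X), M, replace=False)]
def design(A):
    d2 = ((A[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = design(X)
lam = 1e-3                               # ridge regulariser
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ Y)

# Map a new observation to a latent Gaussian: mean from the network,
# variance from the linear-in-weights posterior (an assumed noise model).
noise_var = np.mean((Phi @ W - Y) ** 2)
S_w = noise_var * np.linalg.inv(Phi.T @ Phi + lam * np.eye(M))

x_new = rng.normal(size=(1, 10))
phi = design(x_new)                      # 1 x M
mu = phi @ W                             # latent mean (1 x 2)
var = (phi @ S_w @ phi.T).item()         # isotropic latent variance
print("latent mean:", mu.ravel(), "latent variance:", var)
```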
Abstract:
The existence of viable solutions is proven for nonautonomous upper semicontinuous differential inclusions whose right-hand side is contained in the Clarke subdifferential of a locally Lipschitz continuous function.
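In symbols, the setting can be written as follows (a standard formulation of viability for differential inclusions; the paper's exact hypotheses may differ in detail):

```latex
% Viability for a nonautonomous differential inclusion whose values lie in a
% Clarke subdifferential (standard formulation; details may differ from the paper).
\[
\dot{x}(t) \in F\bigl(t, x(t)\bigr) \subset \partial f\bigl(x(t)\bigr),
\qquad x(0) = x_0 \in K,
\]
% where $f$ is locally Lipschitz, $\partial f$ is its Clarke subdifferential,
% $F$ is upper semicontinuous with nonempty compact convex values, and a
% solution is viable if $x(t) \in K$ for all $t$ in its interval of existence.
```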
Abstract:
A brief introduction to the theory of differential inclusions, viability theory, and selections of set-valued mappings is presented. As an application, the implicit scheme of the Leontief dynamic input-output model is considered.
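For reference, one common form of the dynamic Leontief model and its implicit (backward Euler) scheme, with notation assumed here rather than taken from the paper:

```latex
% Dynamic Leontief input-output model and one implicit discretisation
% (a common convention; the abstract does not fix the notation).
\[
x(t) = A\,x(t) + B\,\dot{x}(t) + c(t),
\]
\[
x_{k+1} = A\,x_{k+1} + B\,\frac{x_{k+1} - x_k}{h} + c_{k+1}
\quad\Longleftrightarrow\quad
\Bigl(I - A - \tfrac{1}{h}B\Bigr)\,x_{k+1} = c_{k+1} - \tfrac{1}{h}\,B\,x_k,
\]
% where $A$ holds the input coefficients, $B$ the capital coefficients,
% $c$ the final demand, and $h$ the time step.
```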
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to support sophisticated XML queries over the data. Current approaches to XML storage and retrieval are either not yet mature (e.g., native approaches) or suffer from inflexibility, heavy fragmentation, and excessive join operations (e.g., non-native approaches such as the relational database approach).

In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), to leverage the advanced Sem-ODB technology for the emerging XML data model. First, a meta-schema based approach was implemented to address the data-model mismatch inherent in non-native approaches. The meta-schema based approach captures the metadata of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings; in this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level translation scheme from XQuery to Semantic SQL (Sem-SQL) was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL and thus avoids the excessive-join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema as a Semantic Schema.

This work revealed that the advanced features of Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, are indeed beneficial in coping with XML storage and retrieval using a non-XML approach. Extensions to Sem-ODB that would make it work more effectively with XML data were also proposed.
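As a rough, purely hypothetical illustration of the meta-schema idea (Sem-ODB's real meta-schema and APIs are not reproduced here, and every name below is invented), a mapping from DTD element declarations to semantic-schema categories might look like:

```python
# Purely illustrative sketch of a meta-schema mapping from DTD element
# declarations to semantic-schema categories and relations. All names are
# hypothetical; Sem-ODB's real meta-schema and APIs are not reproduced here.
from dataclasses import dataclass, field

@dataclass
class Category:            # a Sem-ODB-style category (roughly, a class)
    name: str
    attributes: list = field(default_factory=list)   # scalar attributes
    relations: list = field(default_factory=list)    # links to other categories

def map_dtd_to_schema(dtd):
    """dtd: {element: {"pcdata": [...], "children": [...]}} -> categories."""
    schema = {}
    for elem, decl in dtd.items():
        cat = Category(name=elem)
        cat.attributes = list(decl.get("pcdata", []))
        cat.relations = [(child, "contains") for child in decl.get("children", [])]
        schema[elem] = cat
    return schema

# Toy DTD: <book> holds a title and an author; <author> holds a name.
toy_dtd = {
    "book":   {"pcdata": ["title"], "children": ["author"]},
    "author": {"pcdata": ["name"],  "children": []},
}
for cat in map_dtd_to_schema(toy_dtd).values():
    print(cat)
```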
Abstract:
The coastline of Rio Grande do Norte state is characterized by the presence of dunes and cliffs, the latter consisting of slopes up to 40 meters high with inclinations ranging from 30° to 90° from the horizontal. This dissertation evaluated the stability of the cliff at Ponta do Pirambu in Tibau do Sul/RN and carried out a parametric study of the stability of a homogeneous cliff, taking as variables the material's cohesion, the cliff height, and the slope inclination. The study at Ponta do Pirambu also considered the possibility of a colluvial cover with thickness ranging from 0.50 to 5.00 meters. The analyses were performed by the Bishop method using the GEO5 software. The parametric analysis produced charts relating cliff height to inclination for safety factors equal to 1.00 and 1.50, as well as charts from which the lowest safety factor can easily be obtained from the cohesion, the cliff height, and the inclination. It was concluded that these charts are very useful for preliminary analyses, for defining critical areas in risk mapping of cliff areas, and for deriving an equation that gives the lowest safety factor as a function of the strength parameters and the slope geometry. Regarding the cliff at Ponta do Pirambu, the results showed that it is subject to localized shallow landslides at points where colluvium more than two meters thick may be present; overall, however, the cliff remains stable, with a global safety factor of 2.50 or higher in the saturated condition.
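For readers unfamiliar with the Bishop method referenced above, the following is a minimal sketch of Bishop's simplified method of slices; the slice data are made-up illustrative numbers, not the Ponta do Pirambu cross-section.

```python
# Minimal sketch of Bishop's simplified method of slices for a circular slip
# surface (the method applied via GEO5 in the dissertation). Slice data are
# illustrative only.
import math

def bishop_fs(slices, c, phi_deg, tol=1e-6, max_iter=100):
    """slices: list of (W, alpha_deg, b, u) = slice weight, base angle,
    width, pore pressure on the base. Returns the factor of safety."""
    tan_phi = math.tan(math.radians(phi_deg))
    fs = 1.0                                   # initial guess, then iterate
    for _ in range(max_iter):
        num, den = 0.0, 0.0
        for W, alpha_deg, b, u in slices:
            a = math.radians(alpha_deg)
            m_alpha = math.cos(a) * (1.0 + math.tan(a) * tan_phi / fs)
            num += (c * b + (W - u * b) * tan_phi) / m_alpha
            den += W * math.sin(a)
        fs_new = num / den
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Illustrative slices for a small homogeneous slope (units: kN, deg, m, kPa).
slices = [(120, 35, 2.0, 0), (180, 25, 2.0, 10), (160, 12, 2.0, 15),
          (100, 2, 2.0, 10), (60, -8, 2.0, 0)]
print(f"FS = {bishop_fs(slices, c=20.0, phi_deg=28.0):.2f}")
```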
Abstract:
Acknowledgements: The authors would like to thank Fiona Carr, Carmen Horne, and Brigitta Toth for assistance with data collection. Disclosure statement: No potential conflict of interest was reported by the authors. Funding information: The authors would like to thank the School of Psychology, University of Aberdeen, for contributing funding for participant payments.
Abstract:
This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification.
In detection, we evaluated the influence of the uncertain ocean environment on spectrogram-based detectors and derived the likelihood ratio for the proposed Short Time Fourier Transform (STFT) detector. Experimental results showed that the proposed detector outperforms spectrogram-based detectors; it is also more sensitive to environmental changes because it retains phase information.
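A toy sketch of the phase point (not the dissertation's actual likelihood-ratio derivation): a coherent statistic on complex STFT coefficients keeps phase, while an energy statistic on the magnitude spectrogram discards it. Signal and noise below are synthetic.

```python
# Coherent (complex STFT, keeps phase) vs incoherent (magnitude spectrogram,
# discards phase) detection statistics on a synthetic chirp "call".
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(1)
fs = 1000
t = np.arange(fs) / fs
template = np.sin(2 * np.pi * (50 * t + 40 * t ** 2))    # synthetic chirp
x = 0.5 * template + rng.normal(scale=1.0, size=t.size)  # noisy observation

_, _, Zx = stft(x, fs=fs, nperseg=128)
_, _, Zs = stft(template, fs=fs, nperseg=128)

# Coherent statistic: correlate complex STFTs (uses phase).
coherent = np.abs(np.vdot(Zs, Zx))
# Incoherent statistic: correlate magnitude spectrograms (phase discarded).
incoherent = np.vdot(np.abs(Zs), np.abs(Zx)).real

print(f"coherent stat:   {coherent:.1f}")
print(f"incoherent stat: {incoherent:.1f}")
```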
In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp-rate information and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and Mel Frequency Cepstral Coefficients (MFCCs) when applied to our collected data.
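To illustrate the polynomial-phase idea with a simpler stand-in for the Weyl transform (this is not the dissertation's feature extraction), one can estimate phase coefficients by unwrapping the analytic phase and fitting a polynomial:

```python
# Estimate polynomial phase coefficients of a synthetic "call" via the
# analytic signal. A simple stand-in for Weyl-transform chirp-rate features.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs = 4000
t = np.arange(0, 0.5, 1 / fs)
f0, chirp = 300.0, 400.0                      # Hz, Hz/s (ground truth)
phase = 2 * np.pi * (f0 * t + 0.5 * chirp * t ** 2)
x = np.cos(phase) + 0.1 * rng.normal(size=t.size)

inst_phase = np.unwrap(np.angle(hilbert(x)))  # analytic-signal phase
a2, a1, _ = np.polyfit(t, inst_phase, deg=2)  # quadratic phase fit
f0_hat = a1 / (2 * np.pi)                     # starting frequency (Hz)
chirp_hat = 2 * a2 / (2 * np.pi)              # chirp rate (Hz/s)
print(f"estimated f0 ~ {f0_hat:.0f} Hz, chirp rate ~ {chirp_hat:.0f} Hz/s")
```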
Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We therefore also studied the intrinsic structure of high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmaps and Isomap outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data has nonlinear structure.
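A minimal sketch of this kind of comparison using scikit-learn, with a synthetic S-curve standing in for the whale features:

```python
# Compare a linear embedding (PCA) with nonlinear ones (Isomap, Laplacian
# Eigenmaps) on synthetic manifold data standing in for whale features.
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, SpectralEmbedding

X, color = make_s_curve(n_samples=500, random_state=0)  # nonlinear 3-D data

embeddings = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "Isomap": Isomap(n_components=2, n_neighbors=10).fit_transform(X),
    # SpectralEmbedding is scikit-learn's Laplacian Eigenmaps implementation.
    "LaplacianEigenmaps": SpectralEmbedding(
        n_components=2, n_neighbors=10).fit_transform(X),
}
for name, Y in embeddings.items():
    # Crude quality proxy: how well the first embedding axis tracks the
    # known manifold coordinate (color).
    r = abs(np.corrcoef(color, Y[:, 0])[0, 1])
    print(f"{name:20s} |corr(manifold coord, axis 1)| = {r:.2f}")
```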
We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, from which different physical information can be extracted. Experimental results showed that our PCANet and DCTNet achieve high classification rates on the whale vocalization data set. The word error rate of the DCTNet features is similar to that of MFSC (Mel Frequency Spectral Coefficients) features in speech recognition tasks, suggesting that the convolutional network is able to reveal the acoustic content of speech signals.
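A sketch of a DCTNet-style layer under assumed settings (filter length and bank size are illustrative, not the dissertation's): convolve the signal with a bank of DCT basis filters and take magnitudes.

```python
# One DCTNet-style layer: convolve a signal with a bank of DCT basis filters
# to obtain a time-frequency-like representation.
import numpy as np
from scipy.fft import idct
from scipy.signal import fftconvolve

def dct_filter_bank(n_filters, filter_len):
    """Row k is the k-th orthonormal DCT-II basis vector of length filter_len."""
    bank = np.zeros((n_filters, filter_len))
    for k in range(n_filters):
        e = np.zeros(filter_len)
        e[k] = 1.0
        bank[k] = idct(e, type=2, norm="ortho")  # k-th DCT atom
    return bank

rng = np.random.default_rng(3)
x = rng.normal(size=2000)                # stand-in for a whale recording
bank = dct_filter_bank(n_filters=8, filter_len=64)

# Filter-bank convolution, then magnitude.
layer = np.abs(np.stack([fftconvolve(x, h, mode="same") for h in bank]))
print("layer output shape (filters x time):", layer.shape)
```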
Abstract:
We propose a novel method to harmonize diffusion MRI data acquired from multiple sites and scanners, which is imperative for joint analysis of the data to significantly increase sample size and statistical power of neuroimaging studies. Our method incorporates the following main novelties: i) we take into account the scanner-dependent spatial variability of the diffusion signal in different parts of the brain; ii) our method is independent of compartmental modeling of diffusion (e.g., tensor, and intra-/extracellular compartments) and the acquired signal itself is corrected for scanner-related differences; and iii) inter-subject variability as measured by the coefficient of variation is maintained at each site. We represent the signal in a basis of spherical harmonics and compute several rotation-invariant spherical harmonic features to estimate a region- and tissue-specific linear mapping between the signal from different sites (and scanners). We validate our method on diffusion data acquired from seven different sites (including two GE, three Philips, and two Siemens scanners) on a group of age-matched healthy subjects. Since the extracted rotation-invariant spherical harmonic features depend on the accuracy of the brain parcellation provided by FreeSurfer, we propose a feature-based refinement of the original parcellation such that it better characterizes the anatomy and provides robust linear mappings to harmonize the dMRI data. We demonstrate the efficacy of our method by statistically comparing diffusion measures such as fractional anisotropy, mean diffusivity and generalized fractional anisotropy across multiple sites before and after data harmonization. We also show results using tract-based spatial statistics before and after harmonization for independent validation of the proposed methodology. Our experimental results demonstrate that, for nearly identical acquisition protocols across sites, scanner-specific differences can be accurately removed using the proposed method.
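A simplified sketch of the rotation-invariant spherical harmonic (RISH) harmonization idea (real SH fitting, parcellation, and region/tissue specificity are omitted; the coefficients below are synthetic):

```python
# Per-order RISH features and a per-order scale mapping that brings a target
# site's expected RISH features to match a reference site. Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
orders = {0: 1, 2: 5, 4: 9}        # m-components per even harmonic order l

def rish(coeffs):
    """Mean RISH feature per order l: E[ sum_m c_{lm}^2 ]."""
    out, i = {}, 0
    for l, n in orders.items():
        out[l] = float((coeffs[:, i:i + n] ** 2).sum(axis=1).mean())
        i += n
    return out

n_coeff = sum(orders.values())
ref = rng.normal(scale=1.0, size=(100, n_coeff))   # reference-site subjects
tgt = rng.normal(scale=1.4, size=(100, n_coeff))   # target site, scanner bias

ref_r, tgt_r = rish(ref), rish(tgt)
scales, i = np.ones(n_coeff), 0
for l, n in orders.items():
    scales[i:i + n] = np.sqrt(ref_r[l] / tgt_r[l])  # per-order scale factor
    i += n

tgt_harmonized = tgt * scales
print("target RISH before:", tgt_r)
print("target RISH after: ", rish(tgt_harmonized))
print("reference RISH:    ", ref_r)
```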
Abstract:
Many dynamical processes are subject to abrupt changes in state. Often these perturbations are periodic and of short duration relative to the evolving process. These types of phenomena are described well by impulsive differential equations: systems of differential equations coupled with discrete mappings in state space. In this thesis we employ impulsive differential equations to model disease transmission within an industrial livestock barn. In particular we focus on the poultry industry and a viral disease of poultry called Marek's disease. This system lends itself well to impulsive differential equations: entire cohorts of poultry are introduced to and removed from a barn concurrently, and because Marek's disease is transmitted indirectly and the viral particles can survive outside the host for weeks, depopulating, cleaning, and restocking the barn are integral factors in modelling disease transmission and are completely captured by the impulsive component of the model. Our model allows us to investigate how modern broiler farm practices can make disease elimination difficult or impossible to achieve. It also enables us to investigate factors that may contribute to virulence evolution. Our model suggests that by decreasing the cohort duration or the flock density, Marek's disease can be eliminated from a barn with no increase in cleaning effort. Unfortunately, our model also suggests that these practices will lead to disease evolution towards greater virulence. Additionally, it suggests that if intensive cleaning between cohorts does not rid the barn of disease, it may drive evolution and cause the disease to become more virulent.
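A minimal sketch of the impulsive structure described above, with illustrative (not fitted) parameters: continuous SI-type dynamics within each cohort and a discrete depopulate-clean-restock map between cohorts.

```python
# Impulsive differential equation sketch: within-cohort ODE dynamics plus a
# discrete impulse at cohort turnover. Parameters are illustrative, not
# fitted to Marek's disease data.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.002, 0.05     # indirect transmission, virus shedding (assumed)
mu, eps = 0.01, 0.5           # virus decay; fraction surviving cleaning
N0, T_cohort = 1000, 40.0     # flock size, cohort duration (days)

def within_cohort(t, y):
    """y = [S, I, V]: susceptible birds, infected birds, virus in barn."""
    S, I, V = y
    return [-beta * S * V, beta * S * V, gamma * I - mu * V]

def impulse(y):
    """Cohort turnover: depopulate, clean (virus *= eps), restock."""
    _, _, V = y
    return [N0, 0.0, eps * V]

y = [N0, 0.0, 1.0]            # fresh barn with a little residual virus
for cohort in range(5):
    sol = solve_ivp(within_cohort, (0.0, T_cohort), y, rtol=1e-8)
    S, I, V = sol.y[:, -1]
    print(f"cohort {cohort}: infected at slaughter = {I:.0f}, virus = {V:.1f}")
    y = impulse([S, I, V])
```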
Abstract:
Default invariance is the idea that default does not change at any scale of law and finance. Default is a conserved quantity in a universe where fundamental principles of law and finance operate. It exists at the micro-level as part of the fundamental structure of every financial transaction, and at the macro-level as a fixed critical point within the relatively stable phases of the law and finance cycle. A key point is that default is equivalent to maximizing uncertainty at the micro-level and, at the macro-level, is equivalent to the phase transition where unbearable fluctuations occur in all forms of risk transformation, including maturity, liquidity and credit. As such, default invariance is the glue that links the micro and macro structures of law and finance. In this essay, we apply naïve category theory (NCT), a type of mapping logic, to these types of phenomena. The purpose of using NCT is to introduce a rigorous (but simple) mathematical methodology to law and finance discourse and to show that these types of structural considerations are of prime practical importance and significance to law and finance practitioners. These mappings imply a number of novel areas of investigation. From the micro-structure, three macro-approximations are implied. These approximations form the core analytical framework which we will use to examine the phenomena and hypothesize rules governing law and finance. Our observations from these approximations are grouped into five findings. While the five findings can all be encapsulated by the three approximations, the intended audience of this paper is the non-specialist in law, finance and category theory; for ease of access, we therefore illustrate the use of the mappings with relatively common concepts drawn from law and finance, focusing especially on financial contracts, derivatives, Shadow Banking, credit rating agencies and credit crises.
Abstract:
Let $A$ be a unital dense algebra of linear mappings on a complex vector space $X$. Let $\varphi = \sum_{i=1}^{n} M_{a_i,b_i}$ be a locally quasi-nilpotent elementary operator of length $n$ on $A$. We show that if $\{a_1, \dots, a_n\}$ is locally linearly independent, then the local dimension of $V(\varphi) = \operatorname{span}\{b_i a_j : 1 \le i, j \le n\}$ is at most $\frac{n(n-1)}{2}$. If $\operatorname{ldim} V(\varphi) = \frac{n(n-1)}{2}$, then there exists a representation of $\varphi$ as $\varphi = \sum_{i=1}^{n} M_{u_i,v_i}$ with $v_i u_j = 0$ for $i \ge j$. Moreover, we give a complete characterization of locally quasi-nilpotent elementary operators of length 3.
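For readers outside the area, the notation assumed above (standard for elementary operators, though the abstract does not spell it out) is:

```latex
% Standard conventions assumed by the abstract above: M_{a,b} is the
% two-sided multiplication operator, and an elementary operator of length n
% is a finite sum of such maps.
\[
M_{a,b}\colon A \to A, \qquad M_{a,b}(x) = a\,x\,b,
\qquad
\varphi = \sum_{i=1}^{n} M_{a_i,b_i}.
\]
```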