358 results for MAPPINGS
Abstract:
Power mappings are applied in algebras with logarithms induced by a given linear operator D in order to study particular properties of powers of logarithms. The main results of this paper concern the case when the algebra under consideration is commutative and has a unit and the operator D satisfies the Leibniz condition, i.e. D(xy) = xDy + yDx for x, y ∈ dom D. Note that in number theory there are several well-known formulae expressed by means of combinations of powers of logarithmic and antilogarithmic mappings, or of powers of logarithms and antilogarithms (cf., for instance, the survey of Schinzel [1]).
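As a worked illustration (a sketch only, assuming the usual convention in this setting that a logarithm Lu of an invertible element u satisfies D(Lu) = u^{-1} Du; the paper's own definitions may differ), the Leibniz condition yields by induction, for n ≥ 1,

\[
  D\bigl[(Lu)^{n}\bigr] \;=\; n\,(Lu)^{n-1}\,D(Lu) \;=\; n\,(Lu)^{n-1}\,u^{-1}Du .
\]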
Abstract:
2000 Mathematics Subject Classification: Primary 26A33, 30C45; Secondary 33A35
Abstract:
Mathematics Subject Classification: 26A33, 74B20, 74D10, 74L15
Abstract:
2000 Mathematics Subject Classification: 54C55, 54H25, 55M20.
Abstract:
The thesis investigates transport properties in high-temperature plasmas by means of symplectic mappings. A formalism is developed to derive such maps from symplectic integrators. Concrete maps are given and analyzed.
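As a minimal sketch of the idea behind such maps (not one of the thesis's concrete maps; the kicked-rotor model, the parameter K, and all names below are illustrative assumptions), a single symplectic-Euler step of a periodically kicked rotor already yields an area-preserving map, the Chirikov standard map, whose iteration can be used to probe transport:

import numpy as np

def standard_map(q, p, K=0.9):
    # One symplectic-Euler step of a kicked rotor: kick the momentum,
    # then drift the angle; the resulting map is area-preserving.
    p_new = p + K * np.sin(q)
    q_new = (q + p_new) % (2.0 * np.pi)
    return q_new, p_new

# Follow an ensemble of orbits and watch the momentum spread (transport).
rng = np.random.default_rng(0)
q = rng.uniform(0.0, 2.0 * np.pi, size=50)
p = np.zeros(50)
for _ in range(1000):
    q, p = standard_map(q, p)
print("mean squared momentum after 1000 kicks:", np.mean(p ** 2))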
Abstract:
2000 Mathematics Subject Classification: 60G70, 60F05.
Abstract:
MSC 2010: 34A08, 34A37, 49N70
Abstract:
2000 Mathematics Subject Classification: 54C60, 54C65, 54D20, 54D30.
Abstract:
2000 Mathematics Subject Classification: 54C35, 54D20, 54C60.
Abstract:
Background: Synthetic phonics is the widely accepted approach for teaching reading in English: children are taught to sound out the letters in a word then blend these sounds together. Aims: We compared the impact of two synthetic phonics programmes on early reading. Sample: Children received Letters and Sounds (L&S; 7 schools), which teaches multiple letter-sound mappings, or Early Reading Research (ERR; 10 schools), which teaches only the most consistent mappings plus frequent words by sight. Method: We measured phonological awareness (PA) and reading from school entry to the end of the second (all schools) or third school year (4 ERR, 3 L&S schools). Results: PA was significantly related to all reading measures for the whole sample. However, there was a closer relationship between PA and exception word reading for children receiving the L&S programme. The programmes were equally effective overall, but their impact on reading significantly interacted with school-entry PA: children with poor PA at school entry achieved higher reading attainments under ERR (significant group difference on exception word reading at the end of the first year), whereas children with good PA performed equally well under either programme. Conclusions: The more intensive phonics programme (L&S) heightened the association between PA and exception word reading. Although the programmes were equally effective for most children, results indicate potential benefits of ERR for children with poor PA. We suggest that phonics programmes could be simplified to teach only the most consistent mappings plus frequent words by sight.
Abstract:
The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still are able to represent observation uncertainties in visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are then called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space in which each latent visualised point is transformed to a multivariate Gaussian or T-distribution, using a feed-forward RBF network. Two types of uncertainty are then characterised, depending on the data and the mapping procedure: data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher Information of a visualised distribution. This indicates how well the data have been interpolated, offering a level of ‘surprise’ for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other Deep Learning Machines.
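A minimal sketch of the underlying construction (a toy stand-in only: the RBF network, the 2-D targets, and the crude uncertainty proxy below are assumptions for illustration, not the thesis's N-NS/T-NS mappings or its Fisher-information measure):

import numpy as np

def rbf_features(X, centres, width):
    # Gaussian basis activations for each observation.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy data: 200 points in 5-D, visualised as Gaussians in 2-D.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
centres = X[rng.choice(len(X), 20, replace=False)]   # RBF centres
Phi = rbf_features(X, centres, width=1.0)

# Least-squares readout to 2-D targets (here simply the first two PCA scores).
U, S, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
targets = U[:, :2] * S[:2]
W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

means = Phi @ W                              # means of the latent Gaussians
surprise = 1.0 / (Phi.sum(axis=1) + 1e-9)    # crude proxy for mapping uncertainty
print(means.shape, surprise[:5])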
Abstract:
A brief introduction to the theory of differential inclusions, viability theory, and selections of set-valued mappings is presented. As an application, the implicit scheme of the Leontief dynamic input-output model is considered.
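A minimal numerical sketch, assuming the dynamic Leontief model in the standard form x(t) = A x(t) + B x'(t) + c(t) and a backward-Euler (implicit) discretisation; the matrices, step size, and the precise scheme treated in the paper are assumptions, not taken from the source:

import numpy as np

def implicit_leontief_step(x, A, B, c_next, h):
    # Backward-Euler step for x = A x + B x' + c:
    #   (I - A - B/h) x_{k+1} = c_{k+1} - (B/h) x_k
    n = len(x)
    M = np.eye(n) - A - B / h
    return np.linalg.solve(M, c_next - (B / h) @ x)

# Toy 2-sector example with assumed coefficient matrices.
A = np.array([[0.2, 0.3], [0.1, 0.4]])   # input (flow) coefficients
B = np.array([[0.5, 0.0], [0.0, 0.6]])   # capital (stock) coefficients
x = np.array([100.0, 80.0])
for _ in range(10):
    x = implicit_leontief_step(x, A, B, c_next=np.array([10.0, 12.0]), h=0.1)
print(x)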
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to the XML storage and retrieval issue have the limitations of either not being mature enough (e.g. native approaches) or causing inflexibility, heavy fragmentation and excessive join operations (e.g. non-native approaches such as the relational database approach). In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB) to leverage the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data model mismatch issue that is inherent in the non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, thus enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings. In this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL, thus avoiding the excessive join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema. It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, among others, are indeed beneficial in coping with the XML storage and retrieval issue using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
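A purely hypothetical illustration of the meta-schema idea (none of the names below come from Sem-ODB or the dissertation; they only indicate how meta-data captured from a DTD and from a semantic schema might be linked so that the element-to-category mapping stays data-driven rather than hard-wired into a storage layout):

# Meta-data captured from a DTD: element names, their children and attributes.
dtd_meta = {
    "book": {"children": ["title", "author"], "attributes": ["isbn"]},
    "author": {"children": ["name"], "attributes": []},
}

# Meta-data captured from a semantic schema: categories and their relations.
sem_meta = {
    "Book": {"relations": ["title", "writtenBy"], "multivalued": ["writtenBy"]},
    "Person": {"relations": ["name"], "multivalued": []},
}

# The mapping itself is just data, so it can be revised dynamically.
element_to_category = {"book": "Book", "author": "Person"}
print(element_to_category)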
Abstract:
The coastline of Rio Grande do Norte state is characterized by the presence of dunes and cliffs. The latter consist of slopes up to 40 meters high, with inclinations to the horizontal ranging from 30° to 90°. This dissertation aimed to evaluate the stability of the cliff at Ponta do Pirambu in Tibau do Sul/RN and to carry out a parametric study on the stability of a homogeneous cliff, considering the material's cohesion, the cliff height and the slope inclination as variables. The study at Ponta do Pirambu also considered the possibility of a colluvial cover with thickness ranging from 0.50 to 5.00 meters. The analyses were performed by the Bishop method, using the GEO5 software. The parametric analysis produced charts relating cliff height to slope inclination for safety factors of 1.00 and 1.50, as well as charts from which the lowest safety factor can easily be obtained from the cohesion, the cliff height and its inclination. It was concluded that these charts are very useful for preliminary analyses, for defining critical areas in risk mappings of cliff areas, and for deriving an equation that gives the lowest safety factor as a function of the strength parameters and the slope geometry. Regarding the cliff at Ponta do Pirambu, the results showed that it is subject to superficial landslides at points where colluvium with thickness greater than two meters may be present. However, the cliff remains globally stable, with a global safety factor equal to or greater than 2.50 in the saturated condition.
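For context, a minimal sketch of the fixed-point iteration behind Bishop's simplified method for a circular slip surface (the slice weights, base angles, widths and strength parameters below are illustrative assumptions, not values from the Ponta do Pirambu study or the GEO5 analyses):

import numpy as np

def bishop_fs(W, alpha, b, c, phi, u=None, tol=1e-6, max_iter=100):
    # Bishop's simplified method: iterate
    #   FS = sum[(c*b + (W - u*b)*tan(phi)) / m_alpha] / sum[W*sin(alpha)]
    # with m_alpha = cos(alpha) + sin(alpha)*tan(phi)/FS.
    u = np.zeros_like(W) if u is None else u
    tan_phi = np.tan(np.radians(phi))
    alpha = np.radians(alpha)
    fs = 1.0
    for _ in range(max_iter):
        m = np.cos(alpha) + np.sin(alpha) * tan_phi / fs
        fs_new = np.sum((c * b + (W - u * b) * tan_phi) / m) / np.sum(W * np.sin(alpha))
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Illustrative slices (weights in kN/m, base angles in degrees, widths in m).
W = np.array([150.0, 300.0, 350.0, 250.0, 120.0])
alpha = np.array([50.0, 35.0, 20.0, 5.0, -10.0])
b = np.full(5, 2.0)
print(round(bishop_fs(W, alpha, b, c=25.0, phi=30.0), 2))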
Abstract:
Acknowledgements: The authors would like to thank Fiona Carr, Carmen Horne, and Brigitta Toth for assistance with data collection. Disclosure statement: No potential conflict of interest was reported by the authors. Funding information: The authors would like to thank the School of Psychology, University of Aberdeen, for contributing funding for participant payments.