983 results for Non-binary arithmetic
Abstract:
This work has been partially supported by Grant No. DO 02-275, 16.12.2008, Bulgarian NSF, Ministry of Education and Science.
Abstract:
An approximate number is an ordered pair consisting of a (real) number and an error bound, briefly an error, which is a non-negative (real) number. To compute with approximate numbers, the arithmetic operations on errors must be well understood. To model computations with errors, one should suitably define and study arithmetic operations and order relations over the set of non-negative numbers. In this work we discuss the algebraic properties of non-negative numbers, starting from familiar properties of the real numbers. We focus on certain operations on errors which do not seem to have been sufficiently studied algebraically, restricting ourselves to the operations on errors induced by addition and multiplication by scalars. We pay special attention to subtractability-like properties of errors and the induced “distance-like” operation. This operation is implicitly used under different names in several contemporary fields of applied mathematics (inner subtraction and inner addition in interval analysis, the generalized Hukuhara difference in fuzzy set theory, etc.). Here we present some new results on the algebraic properties of this operation.
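The pair structure and the error operations described above can be sketched in code. The following is a minimal illustration of my own, not the paper's formalism: errors add under addition, scale by |c| under scalar multiplication, and the "distance-like" operation on two errors is taken here as their absolute difference, by analogy with inner subtraction on interval radii.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approx:
    """An approximate number: a value paired with a non-negative error bound."""
    value: float
    error: float

    def __post_init__(self):
        if self.error < 0:
            raise ValueError("error bound must be non-negative")

    def __add__(self, other):
        # errors add, so the true sum is guaranteed to lie within the new bound
        return Approx(self.value + other.value, self.error + other.error)

    def scale(self, c):
        # multiplication by a scalar c scales the error by |c|
        return Approx(c * self.value, abs(c) * self.error)

def inner_sub(a: float, b: float) -> float:
    # "distance-like" operation on errors, analogous to the
    # generalized Hukuhara difference on interval radii
    return abs(a - b)
```

For example, adding (1.0 ± 0.1) and (2.0 ± 0.2) gives (3.0 ± 0.3), while scaling (1.0 ± 0.1) by −3 gives (−3.0 ± 0.3).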
Abstract:
In this study, an Atomic Force Microscopy (AFM) roughness analysis was performed on non-commercial Nitinol alloys with Electropolished (EP) and Magneto-Electropolished (MEP) surface treatments, and on commercially available stents, by measuring Root-Mean-Square (RMS) roughness, Average Roughness (Ra), and Surface Area (SA) values over scan areas ranging from 800 x 800 nm to 115 x 115 µm on the alloy surfaces, and from 800 x 800 nm to 40 x 40 µm on the commercial stents. Results showed that NiTi-Ta 10 wt% with an EP surface treatment yielded the highest overall roughness, while the NiTi-Cu 10 wt% alloy had the lowest roughness when analyzed over 115 x 115 µm. Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS) analysis revealed unique surface morphologies for surface-treated alloys, as well as an aggregation of the ternary elements Cr and Cu at grain boundaries in MEP and EP surface-treated alloys, and in non-surface-treated alloys. Such surface micro-patterning on ternary Nitinol alloys could increase cellular adhesion and accelerate surface endothelialization of endovascular stents, thus reducing the likelihood of in-stent restenosis, and could provide insight into hemodynamic flow regimes and the corrosion behavior of an implantable device influenced by such surface micro-patterns.
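The two roughness figures of merit used above have standard definitions: RMS roughness is the root-mean-square of height deviations from the mean surface, and Ra is their mean absolute value. A minimal sketch (my own, not the study's analysis code, which would be the AFM vendor's software):

```python
import numpy as np

def roughness(heights):
    """RMS and average (Ra) roughness of an AFM height map.

    heights: 2-D array of surface heights; deviations are measured
    from the mean height (a stand-in for proper plane levelling).
    """
    z = np.asarray(heights, dtype=float)
    dev = z - z.mean()
    rms = np.sqrt(np.mean(dev ** 2))   # root-mean-square roughness
    ra = np.mean(np.abs(dev))          # average roughness
    return rms, ra
```

Because RMS weights large excursions quadratically, RMS ≥ Ra always holds for the same scan, which is why the two values are reported together.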
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to the XML storage and retrieval issue have the limitations of either being not yet mature (e.g. native approaches) or causing inflexibility, heavy fragmentation, and excessive join operations (e.g. non-native approaches such as the relational database approach).

In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), to leverage the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data model mismatch issue that is inherent in the non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, and thus enables a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings. In this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL, and thus avoids the excessive-join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema.

It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, among others, are indeed beneficial in coping with the XML storage and retrieval issue using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
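The core of the meta-schema idea is that metadata about the DTD, metadata about the target schema, and the mapping between them are all stored as data rather than hard-coded, which is what makes the mapping dynamic. A hypothetical illustration of my own (every name here — book, Publication, and so on — is invented, not taken from the dissertation):

```python
# Metadata extracted from a DTD: elements and their content models.
dtd_meta = {
    "book":   {"children": ["title", "author"]},
    "title":  {"content": "#PCDATA"},
    "author": {"content": "#PCDATA"},
}

# Metadata describing a target semantic schema: categories and attributes.
semantic_meta = {
    "Publication": {"attrs": {"title": "string", "writtenBy": "Person"}},
    "Person":      {"attrs": {"name": "string"}},
}

# The element-to-schema mapping is itself data, so it can be changed
# without rewriting either the stored documents or the loader code.
mapping = {
    "book":   ("category", "Publication"),
    "title":  ("attribute", ("Publication", "title")),
    "author": ("category", "Person"),
}

def target_of(element):
    """Look up where a DTD element lands in the semantic schema."""
    kind, target = mapping[element]
    return kind, target
```

A loader driven by such tables can remap a DTD to a revised schema by editing `mapping` alone, which is the flexibility a fixed relational shredding scheme lacks.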
Abstract:
Object-oriented design and object-oriented languages support the development of independent software components such as class libraries. When using such components, versioning becomes a key issue. While various ad hoc techniques and coding idioms have been used to provide versioning, all of these techniques have deficiencies: ambiguity, the necessity of recompilation or re-coding, or the loss of binary compatibility of programs. Components from different software vendors are versioned at different times, so maintaining compatibility between versions must be consciously engineered. New technologies such as distributed objects further complicate library versioning by requiring multiple implementations of a type to exist simultaneously in a program. This paper describes a new C++ object model called the Shared Object Model for C++ users, and a new implementation model called the Object Binary Interface for C++ implementors. These techniques provide a mechanism for allowing multiple implementations of an object in a program. Early analysis of this approach has shown it to have performance broadly comparable to conventional implementations.
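The paper's mechanism is a C++ object model, but the central idea — clients bind to an interface plus a version, never to a concrete layout, so several implementations of one type can coexist in a program — can be loosely illustrated in Python. This is an analogue of my own construction, not the Object Binary Interface itself, and all names are invented:

```python
from abc import ABC, abstractmethod
from collections import deque

class Stack(ABC):
    """The published interface; clients never name a concrete class."""
    registry = {}  # version number -> implementation class

    @classmethod
    def register(cls, version):
        def deco(impl):
            cls.registry[version] = impl
            return impl
        return deco

    @classmethod
    def create(cls, version):
        # old and new implementations coexist; the caller picks a version
        return cls.registry[version]()

    @abstractmethod
    def push(self, x): ...
    @abstractmethod
    def pop(self): ...

@Stack.register(1)
class ListStack(Stack):          # original vendor implementation
    def __init__(self): self._d = []
    def push(self, x): self._d.append(x)
    def pop(self): return self._d.pop()

@Stack.register(2)
class DequeStack(Stack):         # later implementation, same interface
    def __init__(self): self._d = deque()
    def push(self, x): self._d.append(x)
    def pop(self): return self._d.pop()
```

In C++ the hard part, which the paper addresses, is doing this without recompiling clients when an implementation's object layout changes; Python sidesteps that because attribute access is already indirect.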
Abstract:
Currently there is no consensus as to the specific cognitive impairments that characterize mathematical disabilities (MD) or specific subtypes such as an arithmetic disability (AD). The present study sought to address this concern by examining cognitive processes that might undergird AD in children, using archival data in two investigations. The first investigation examined the executive functioning and working memory of children with AD. An age-matched, achievement-matched design was employed to explore whether children with AD exhibit developmental lags or deficits in these cognitive domains. While children with AD did not exhibit impairments in verbal working memory or colour-word inhibition, they did demonstrate impairments in shifting attention, visual-spatial working memory, and quantity inhibition. As children with AD did not perform more poorly than their younger achievement-matched peers on any of these tasks, impairments in specific areas of executive functioning and working memory appeared to reflect a developmental lag rather than a cognitive deficit. The second study examined the phonological processing performance of children with AD compared to children with comorbid disabilities in arithmetic and word recognition (AD/WRD) and to typically achieving (TA) children. Results indicated that, while children with AD did demonstrate impairments on all isolated naming speed tasks, trail making (digits), and memory for digits, they did not demonstrate impairments on measures of phonological awareness, nonword repetition, serial processing speed, or serial naming speed. In contrast, children with AD/WRD demonstrated impairments on measures of phonological awareness, phonological short-term memory, isolated naming speed, serial processing speed, and the alphabet a-z task. Overall, results suggested that phonological processing impairments are more prominent in children with a WRD than in children with an AD.
Together, these studies further our understanding of the nature of the cognitive processes that underlie AD by focusing upon rarely used methods (i.e., age-matched achievement-matched design) and under-examined cognitive domains (i.e., phonological processing).
Abstract:
Non-parametric multivariate analyses of complex ecological datasets are widely used. Following appropriate pre-treatment of the data, inter-sample resemblances are calculated using appropriate measures. Ordination and clustering derived from these resemblances are used to visualise relationships among samples (or variables). Hierarchical agglomerative clustering with group-average (UPGMA) linkage is often the clustering method chosen. Using an example dataset of zooplankton densities from the Bristol Channel and Severn Estuary, UK, a range of existing and new clustering methods are applied and the results compared. Although the examples focus on analysis of samples, the methods may also be applied to species analysis. Dendrograms derived by hierarchical clustering are compared using cophenetic correlations, which are also used to determine the optimum β value in flexible beta clustering. A plot of cophenetic correlation against original dissimilarities reveals that a tree may be a poor representation of the full multivariate information. UNCTREE is an unconstrained binary divisive clustering algorithm in which values of the ANOSIM R statistic are used to determine (binary) splits in the data, to form a dendrogram. A form of flat clustering, k-R clustering, uses a combination of ANOSIM R and Similarity Profiles (SIMPROF) analyses to determine the optimum value of k, the number of groups into which samples should be clustered, and the sample membership of the groups. Robust outcomes from the application of such a range of differing techniques to the same resemblance matrix, as here, result in greater confidence in the validity of a clustering approach.
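The first comparison described above — build a UPGMA dendrogram from a resemblance matrix, then measure how faithfully the tree preserves the original dissimilarities via the cophenetic correlation — can be sketched with standard SciPy calls. This uses random stand-in data, not the Bristol Channel zooplankton densities:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# stand-in data: 12 samples (rows) x 5 transformed species densities
x = rng.random((12, 5))

d = pdist(x)                         # condensed inter-sample dissimilarities
tree = linkage(d, method="average")  # UPGMA / group-average linkage
c, coph_d = cophenet(tree, d)        # cophenetic correlation and distances
```

Plotting `coph_d` against `d` is exactly the diagnostic the abstract mentions: a wide scatter means the dendrogram is a poor summary of the full resemblance structure, whatever `c` alone suggests.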
Abstract:
Organizations and individuals pursuing non-commercial initiatives are in constant search of funding. Crowdfunding, an alternative way of collecting funds from the general public through Internet-based platforms, is currently gaining popularity all over the world. Several research initiatives in this field show the influence of different factors on the success of campaigns with both commercial and non-commercial objectives, and the non-profit nature of a project is named among the key predictors of a positive outcome. In this context, the purpose of this work is to check whether the tendencies detected by scholars hold for non-commercial initiatives, especially those with socially aware objectives, posted on the Belarusian crowdfunding platform Ulej. The research hypotheses are validated using binary logistic regression and statistical testing. The results showed that the dependent variable, success, is influenced by such independent variables as the funding goal, the sum collected, the number of sponsors, and the average pledge, while the effect of the campaign duration is not significant. Inferential analysis shows no difference in the level of success between commercial and non-commercial projects, and social orientation does not increase the likelihood of meeting financial goals. These findings are opposite to those reported in the literature; however, this could be explained by the platform's short period of operation and the small number of projects.
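Binary logistic regression, the method named above, models the probability of campaign success as a logistic function of the predictors. A minimal self-contained sketch with invented toy data (the study's actual Ulej dataset and software are not reproduced here):

```python
import numpy as np

def fit_logit(X, y, lr=0.5, steps=5000):
    """Binary logistic regression fitted by gradient descent."""
    A = np.column_stack([np.ones(len(X)), X])      # add intercept column
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(A @ w, -30, 30)))  # P(success)
        w -= lr * A.T @ (p - y) / len(y)           # log-loss gradient step
    return w

# invented toy data in the spirit of the study: success depends on the
# funding goal, the sum collected, and the number of backers
rng = np.random.default_rng(1)
goal = rng.uniform(1.0, 10.0, 200)
collected = goal * rng.uniform(0.2, 1.8, 200)
backers = rng.integers(1, 100, 200).astype(float)
y = (collected >= goal).astype(float)              # did the campaign succeed?

X = np.column_stack([goal, collected, backers])
X = (X - X.mean(0)) / X.std(0)                     # standardise predictors
w = fit_logit(X, y)
```

Each fitted coefficient in `w` is a log-odds effect, so its sign and significance answer exactly the kind of question the study asks, e.g. whether a larger funding goal lowers the odds of success.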
Abstract:
Hospital-acquired infections (HAI) are costly, but many are avoidable. Evaluating prevention programmes requires data on their costs and benefits. Estimating the actual costs of HAI (a measure of the cost savings due to prevention) is difficult because HAI increases cost by extending patient length of stay, yet length of stay is itself a major risk factor for HAI. This endogeneity bias can confound attempts to measure the cost of HAI accurately. We propose a two-stage instrumental variables estimation strategy that explicitly controls for the endogeneity between the risk of HAI and length of stay. We find that a 10% reduction in the ex ante risk of HAI results in an expected saving of £693 (US$984).
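The two-stage strategy is textbook two-stage least squares: first project the endogenous regressor onto an instrument, then regress the outcome on the fitted values. The sketch below, with synthetic data and a single instrument, shows why this matters — an unobserved confounder makes ordinary least squares overstate the effect, while 2SLS recovers it. This is an illustration of the general technique, not the paper's full specification:

```python
import numpy as np

def tsls(y, x, z):
    """Two-stage least squares: one endogenous regressor x, one instrument z."""
    Z = np.column_stack([np.ones(len(z)), z])
    # stage 1: project the endogenous regressor onto the instrument
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # stage 2: regress the outcome on the fitted (exogenous) values
    X = np.column_stack([np.ones(len(y)), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0]    # (intercept, slope)

# synthetic data with endogeneity: u drives both x and y
rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)               # instrument: affects x, not y directly
u = rng.normal(size=n)               # unobserved confounder
x = z + u + rng.normal(size=n)       # endogenous regressor (cf. length of stay)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect is 2

b_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
b_iv = tsls(y, x, z)                 # IV slope close to 2; OLS slope biased up
```

The validity of the exercise rests entirely on the instrument affecting the outcome only through the endogenous regressor, which is the substantive modelling question the paper has to defend.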