Abstract:
As one of the longest-running franchises in cinema history, and with its well-established use of product placement, the James Bond film series provides an ideal framework within which to measure and catalogue the number and types of products used within a particular timeframe. This case study draws on extensive content analysis of the James Bond film series to chart the evolution of product placement across the franchise's 50-year history.
Abstract:
In many modeling situations in which parameter values can only be estimated or are subject to noise, the appropriate mathematical representation is a stochastic ordinary differential equation (SODE). However, unlike the deterministic case, for which there are suites of sophisticated numerical methods, numerical methods for SODEs are much less developed. Until a recent paper by K. Burrage and P.M. Burrage (1996), the highest strong order of a stochastic Runge-Kutta method was one. But K. Burrage and P.M. Burrage (1996) showed that by including additional random variable terms representing approximations to the higher order Stratonovich (or Itô) integrals, higher order methods could be constructed. However, this analysis applied only to the one Wiener process case. In this paper, it is shown that in the multiple Wiener process case all known stochastic Runge-Kutta methods can suffer a severe order reduction if there is non-commutativity between the functions associated with the Wiener processes. Importantly, however, it is also shown how this order reduction can be repaired if certain commutator operators are included in the Runge-Kutta formulation. (C) 1998 Elsevier Science B.V. and IMACS. All rights reserved.
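The order reduction described in the abstract hinges on whether the diffusion functions commute. As a minimal illustrative sketch (a hypothetical scalar example, not taken from the paper), the commutativity condition for dX = f(X) dt + g1(X) o dW1 + g2(X) o dW2 is g1*g2' - g2*g1' = 0, which can be checked numerically with central differences:

```python
import math

def commutator(g1, g2, x, h=1e-6):
    """Evaluate g1*g2' - g2*g1' at x via central differences.
    Zero for all x means the two diffusion functions commute, so no
    order reduction from non-commutative noise is expected."""
    d_g1 = (g1(x + h) - g1(x - h)) / (2 * h)
    d_g2 = (g2(x + h) - g2(x - h)) / (2 * h)
    return g1(x) * d_g2 - g2(x) * d_g1

# Commutative pair: both diffusions are scalar multiples of x.
lin1 = lambda x: 0.5 * x
lin2 = lambda x: 2.0 * x
# Non-commutative pair.
g_lin = lambda x: x
g_sin = lambda x: math.sin(x)

for x in (0.5, 1.0, 2.0):
    assert abs(commutator(lin1, lin2, x)) < 1e-6
print(commutator(g_lin, g_sin, 1.0))  # nonzero: x*cos(x) - sin(x) at x=1
```

The linear pair commutes for every x, while the bracket of the second pair is x*cos(x) - sin(x), which vanishes only at isolated points.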
Abstract:
In Burrage and Burrage [1] it was shown that by introducing a very general formulation for stochastic Runge-Kutta methods, the previous strong order barrier of order one could be broken without having to use higher derivative terms. In particular, methods of strong order 1.5 were developed in which a Stratonovich integral of order one and one of order two were present in the formulation. In this present paper, general order results are proven about the maximum attainable strong order of these stochastic Runge-Kutta methods (SRKs) in terms of the order of the Stratonovich integrals appearing in the Runge-Kutta formulation. In particular, it will be shown that if an s-stage SRK contains Stratonovich integrals up to order p then the strong order of the SRK cannot exceed min{(p + 1)/2, (s - 1)/2), p greater than or equal to 2, s greater than or equal to 3 or 1 if p = 1.
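As a concrete sketch of a low-order member of this family (a minimal two-stage scheme, the stochastic Heun method for Stratonovich SDEs with one Wiener process, not the paper's order-1.5 methods), the supporting-stage structure of an s-stage SRK looks like this; the test equation and parameter values are hypothetical:

```python
import math, random

def stratonovich_heun(f, g, x0, t_end, n, rng):
    """Two-stage stochastic Runge-Kutta (Heun) scheme for the
    Stratonovich SDE dX = f(X) dt + g(X) o dW with one Wiener process.
    Returns the numerical path and the sampled Brownian path."""
    h = t_end / n
    x, w = x0, 0.0
    xs, ws = [x], [w]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        # Supporting stage: explicit Euler predictor.
        xbar = x + h * f(x) + dw * g(x)
        # Trapezoidal corrector in both the drift and the diffusion.
        x = x + 0.5 * h * (f(x) + f(xbar)) + 0.5 * dw * (g(x) + g(xbar))
        w += dw
        xs.append(x)
        ws.append(w)
    return xs, ws

# Linear test problem dX = a X dt + b X o dW; in the Stratonovich
# calculus the exact pathwise solution is X_t = X_0 exp(a t + b W_t).
a, b, x0 = -1.0, 0.5, 1.0
rng = random.Random(42)
xs, ws = stratonovich_heun(lambda x: a * x, lambda x: b * x, x0, 1.0, 2000, rng)
exact = x0 * math.exp(a * 1.0 + b * ws[-1])
print(abs(xs[-1] - exact))  # small pathwise error
```

Refining the step size along the same Brownian path shows the strong convergence behaviour the abstract's order bound describes.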
Abstract:
It is certain that there will be changes in environmental conditions across the globe as a result of climate change. Such changes will require the building of biological, human and infrastructure resilience. In some instances the building of such resilience will be insufficient to deal with extreme changes in environmental conditions, and legal frameworks will be required to provide recognition and support for people dislocated because of environmental change. Such dislocation may occur internally, within the country of origin, or externally, into another State's territory. International and national legal frameworks do not currently recognise or assist people displaced as a result of environmental factors, including displacement occurring as a result of climate change. Legal frameworks developed to deal with this issue will need to consider the legal rights of those people displaced and the legal responsibilities of those countries required to respond to such displacement. The objective of this article is to identify the most suitable international institution to host a program addressing climate displacement. There are a number of areas of international law that are relevant to climate displacement, including refugee law, human rights law and international environmental law. These regimes, however, were not designed to protect people relocating as a result of environmental change. As such, while they may be of indirect relevance to climate displacement, they currently do nothing to directly address this complex issue. In order to determine the most appropriate institution to address and regulate climate displacement, it is imperative to consider issues of governance.
This paper seeks to examine this issue and determine whether it is preferable to place climate displacement programs within existing international legal frameworks or whether it is necessary to regulate this area through an entirely new institution specifically designed to deal with the complex and cross-cutting issues surrounding the topic. Commentators in this area have proposed three different regulatory models for addressing climate displacement: (a) expand the definition of refugee under the Refugee Convention to encompass persons displaced by climate change; (b) implement a new stand-alone Climate Displacement Convention; or (c) implement a Climate Displacement Protocol to the UNFCCC. This article examines each of these proposed models against a number of criteria to determine the model that is most likely to address the needs and requirements of people displaced by climate change. It also identifies the model that is likely to be most politically acceptable and realistic for those countries likely to incur responsibilities through its implementation. In order to assess whether the rights and needs of the people to be displaced are to be met, theories of procedural, distributive and remedial justice are used to consider the equity of the proposed schemes. In order to identify the most politically palatable and realistic scheme, reference is made to previous state practice and compliance with existing obligations in the area. It is suggested that the criteria identified in this article should underpin any future climate displacement instrument.
Abstract:
Multilevel converters, because of their ability to generate high-quality output voltage, are used in several applications. Various modulation and control techniques, such as space vector modulation and harmonic elimination (HE) methods, have been introduced by researchers to control the output voltage of multilevel converters. Multilevel converters may have a DC link with equal or unequal DC voltages. In this study a new technique based on the HE method is proposed for multilevel converters with unequal DC link voltages. The DC link voltage levels are considered as additional variables for the HE method, and the voltage levels are defined based on the HE results. Increasing the number of voltage levels can reduce the lower order harmonic content because more variables are created. In comparison to previous methods, this new technique improves the output voltage quality by reducing its total harmonic distortion, which must be taken into consideration in applications such as uninterruptible power supplies, motor drive systems and piezoelectric transducer excitation. In order to verify the proposed modulation technique, MATLAB simulations and experimental tests are carried out for a single-phase four-level diode-clamped converter.
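The core of any HE method is solving the transcendental switching-angle equations. The sketch below is a hypothetical simplified case (two angles, equal DC levels; the paper's unequal-DC formulation adds the level voltages as extra unknowns): set the fundamental to a modulation index m and cancel the 5th harmonic, via Newton iteration with a finite-difference Jacobian:

```python
import math

def solve_he(m, x0, iters=50, fd=1e-7):
    """Newton iteration for the two-angle harmonic-elimination
    equations of an equal-DC staircase waveform:
      cos(a1) + cos(a2) = 2*m      (fundamental amplitude)
      cos(5*a1) + cos(5*a2) = 0    (eliminate the 5th harmonic)"""
    def F(a1, a2):
        return (math.cos(a1) + math.cos(a2) - 2.0 * m,
                math.cos(5 * a1) + math.cos(5 * a2))
    a1, a2 = x0
    for _ in range(iters):
        f1, f2 = F(a1, a2)
        # Finite-difference Jacobian of F.
        j11 = (F(a1 + fd, a2)[0] - f1) / fd
        j12 = (F(a1, a2 + fd)[0] - f1) / fd
        j21 = (F(a1 + fd, a2)[1] - f2) / fd
        j22 = (F(a1, a2 + fd)[1] - f2) / fd
        det = j11 * j22 - j12 * j21
        # Newton step: solve J * delta = F.
        a1 -= (f1 * j22 - f2 * j12) / det
        a2 -= (f2 * j11 - f1 * j21) / det
    return a1, a2

a1, a2 = solve_he(m=0.8, x0=(0.3, 0.9))
# Residuals of both equations should vanish at the solution.
r1 = math.cos(a1) + math.cos(a2) - 1.6
r5 = math.cos(5 * a1) + math.cos(5 * a2)
print(a1, a2, r1, r5)
```

Each additional angle (or, in the paper's method, each free DC level voltage) adds one equation, allowing one more low-order harmonic to be eliminated.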
Creating 'saviour siblings': the notion of harming by conceiving in the context of healthy children
Abstract:
Over the past decade there have been a number of families who have utilised assisted reproductive technologies (ARTs) to create a tissue-matched child, with the purpose of using the child’s tissue to cure an existing sick child. This inevitably brings such families a sense of hope, as the ultimate aim is to overcome a family health crisis. However, this specific use of reproductive technologies has been the subject of significant criticism, most of which is levelled against the potential harm to the ‘saviour’ child. In Australia, families seeking to access reproductive technologies in this context are therefore required to justify their motives to an ethics committee in order to establish, amongst other things, whether the child will suffer harm once born. This paper explores the concept of harm in the context of conception, focusing on whether it is possible to ‘harm’ a healthy child who has been conceived to save another. To achieve this, the paper will evaluate the impact of the ‘non-identity’ principle in the ‘saviour sibling’ context, and assess the existing body of literature which addresses ‘harm’ in the context of conception. As will be established, the majority of such literature has focused on ‘wrongful life’ cases, which seek to address whether an existing child who has been born with a disability has been harmed. Finally, this paper will distinguish the harm arguments in the ‘saviour sibling’ context based on the fact that the harm evaluation concerns the ‘future-life’ assessment of a healthy child.
Abstract:
Self-hypnosis was taught to 87 obstetric patients (HYP) and was not taught to 56 other patients (CNTRL), all delivered by the same family physician, in order to determine whether the use of self-hypnosis by low-risk obstetric patients leads to fewer technologic interventions during their deliveries or greater satisfaction of parturients with their delivery experience or both. The outcomes of the deliveries of these two groups were compared, and the HYP group was compared to 352 low-risk patients delivered by other family physicians at the same hospital (WCH). Questionnaires were mailed postpartum to 156 patients, all delivered by the same family physician, to determine satisfaction with delivery using the Labor and Delivery Satisfaction Index (LADSI). The hypnosis group showed a significant reduction in the number of epidurals (11.4% less than CNTRL and 17.9% less than WCH, p < 0.05) and the use of intravenous lines (18.5% less for both, p < 0.05). The number of episiotomies was significantly less in the HYP group compared to WCH (15.9%, p < 0.05) and 11.5% less when compared to CNTRL. The tear rate was not statistically different. Combined use of the intervention triad (epidural–forceps–episiotomy) was less for HYP than for CNTRL (15.8% less) and WCH (10.2% less, p < 0.05). More deliveries were done in the labor room with HYP than CNTRL (21%, p < 0.05). The second stage was shortened by 10 min (HYP vs CNTRL). Overall satisfaction of HYP and CNTRL patients was similar and generally favorable.
Abstract:
None of the currently used tonometers produces estimated IOP values that are free of error. Measurement uncertainty arises from the indirect measurement of corneal deformation and from the fact that pressure calculations are based on population-averaged parameters of the anterior segment. Reliable IOP values are crucial for understanding and monitoring a number of eye pathologies, e.g. glaucoma. We have combined high-speed swept-source OCT with an air-puff chamber. The system provides direct measurement of the deformation of the cornea and the anterior surface of the lens. This paper describes in detail the performance of the air-puff ssOCT instrument. We present different approaches to data presentation and analysis. Changes in deformation amplitude appear to be a good indicator of IOP changes. However, it seems that in order to provide accurate intraocular pressure values, additional information on corneal biomechanics is necessary. We believe that such information could be extracted from data provided by the air-puff ssOCT.
Abstract:
Product rating systems are very popular on the web, and users increasingly depend on the overall product ratings provided by websites to make purchase decisions or to compare various products. Currently most of these systems depend directly on users’ ratings and aggregate them using simple methods such as the mean or median [1]. In fact, many websites also allow users to express their opinions in the form of textual product reviews. In this paper, we propose a new product reputation model that uses opinion mining techniques to extract sentiments about a product’s features, and then provide a method to generate a more realistic reputation value for every feature of the product and for the product itself. We consider the strength of each opinion rather than only its orientation. We do not treat all product features equally when we calculate the overall product reputation, as some features are more important to customers than others and consequently have more impact on their buying decisions. Our method provides helpful details about product features for customers rather than representing reputation as a single number.
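A minimal sketch of the two ideas combined, signed opinion strength plus feature-importance weighting, is shown below. The feature names, weights, and the [-1, 1] to [1, 5] mapping are all hypothetical, standing in for whatever the mining pipeline produces:

```python
def feature_reputation(opinions):
    """Aggregate signed opinion strengths (in [-1, 1]) for one
    feature into a reputation score on a 1..5 scale."""
    if not opinions:
        return 3.0  # neutral prior when a feature is never mentioned
    mean = sum(opinions) / len(opinions)
    return 3.0 + 2.0 * mean  # map [-1, 1] onto [1, 5]

def product_reputation(features):
    """Weight per-feature reputations by feature importance so that
    features customers care about dominate the overall score."""
    total_w = sum(w for w, _ in features.values())
    return sum(w * feature_reputation(ops)
               for w, ops in features.values()) / total_w

# Hypothetical camera example: feature -> (importance, opinion strengths).
camera = {
    "image quality": (0.5, [0.9, 0.7, 0.8]),
    "battery life":  (0.3, [-0.6, -0.2]),
    "packaging":     (0.2, [0.1]),
}
overall = product_reputation(camera)
print(round(overall, 3))  # → 3.6
```

The per-feature scores (4.6, 2.2, 3.2 here) are exactly the "helpful details" the abstract refers to; the weighted overall value is only their summary.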
Abstract:
A newly developed computational approach is proposed in this paper for the analysis of multiple crack problems based on the eigen crack opening displacement (COD) boundary integral equations. The eigen COD refers to a crack in an infinite domain under fictitious traction acting on the crack surface. With the concept of eigen COD, large numbers of cracks can be solved by using the conventional displacement discontinuity boundary integral equations in an iterative fashion, with a small system matrix, to determine all the unknown CODs step by step. To deal with the interactions among cracks in multiple crack problems, all cracks are divided into two groups, namely the adjacent group and the far-field group, according to their distance to the current crack under consideration. The adjacent group contains cracks at relatively small distances with strong effects on the current crack, while the far-field group is composed of cracks at relatively large distances. Correspondingly, the eigen COD of the current crack is computed in two parts. The first part is computed from the fictitious tractions of the adjacent cracks via the local Eshelby matrix derived from the traction boundary integral equations in discretized form, while the second part is computed from those of the far-field cracks, so that high computational efficiency is achieved. The numerical results of the proposed approach are compared not only with those using the dual boundary integral equations (D-BIE) and the BIE with numerical Green's functions (NGF) but also with analytical solutions in the literature. The effectiveness and efficiency of the proposed approach are verified. Numerical examples are provided for the stress intensity factors of cracks, up to several thousand in number, in both finite and infinite plates.
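The near/far splitting can be illustrated generically, independent of the eigen-COD formulation itself. In the hypothetical sketch below (not the paper's method), unknowns within a "near radius" of the current one are updated with the latest values, while the weak far-field contributions are lagged one sweep, so no large global matrix is ever factorized:

```python
def near_far_solve(A, b, near_radius=1, sweeps=200):
    """Iteratively solve A x = b, treating only coefficients within
    `near_radius` of the diagonal (the 'adjacent group') with current
    values and lagging the far-field contributions by one sweep,
    in the spirit of grouping interactions by distance.
    Assumes A is diagonally dominant so the iteration converges."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        x_old = x[:]  # far-field contributions frozen for this sweep
        for i in range(n):
            far = sum(A[i][j] * x_old[j] for j in range(n)
                      if abs(i - j) > near_radius)
            near = sum(A[i][j] * x[j] for j in range(n)
                       if 0 < abs(i - j) <= near_radius)
            x[i] = (b[i] - near - far) / A[i][i]
    return x

# Hypothetical system: strong coupling to neighbours, weak
# 1/r^2-like coupling to distant unknowns.
n = 8
A = [[4.0 if i == j else 1.0 / (i - j) ** 2 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = near_far_solve(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i])
               for i in range(n))
print(residual)  # tiny for this diagonally dominant system
```

In the paper the "near" solve is the local Eshelby-matrix step and the unknowns are CODs rather than scalars, but the efficiency argument is the same: work per sweep is dominated by a few strong local interactions.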
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends, and text mining algorithms are used to safeguard the quality of extracted knowledge. However, patterns extracted using text or data mining methods can be noisy and inconsistent. Thus, different challenges arise, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective is not only to reduce the number of closed sequential patterns but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
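A minimal sketch of the co-occurrence idea (hypothetical data and threshold; the paper's actual matrix and filtering rule may differ): count how often pairs of extracted patterns appear in the same document, then drop patterns that never co-occur strongly with any other pattern, treating them as noise:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_filter(doc_patterns, min_support=2):
    """Build a pattern co-occurrence matrix from per-document pattern
    sets and keep only patterns that co-occur with at least one other
    pattern in `min_support` documents; isolated patterns are dropped
    as noise."""
    co = defaultdict(int)
    for patterns in doc_patterns:
        for p, q in combinations(sorted(patterns), 2):
            co[(p, q)] += 1
    keep = set()
    for (p, q), count in co.items():
        if count >= min_support:
            keep.update((p, q))
    return keep, dict(co)

# Hypothetical extracted patterns per document.
docs = [
    {"data mining", "pattern", "text"},
    {"data mining", "pattern"},
    {"noise burst"},            # appears alone: filtered out as noise
    {"pattern", "text"},
]
kept, co = cooccurrence_filter(docs)
print(sorted(kept))  # → ['data mining', 'pattern', 'text']
```

The matrix `co` itself is the weighting signal the abstract mentions: pair counts give each surviving pattern a relational weight rather than an isolated frequency.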
Abstract:
Purpose: Within the context of high global competitiveness, knowledge management (KM) has proven to be one of the major factors contributing to enhanced business outcomes, and knowledge sharing (KS) is one of the most critical of all KM activities. From a manufacturing industry perspective, supply chain management (SCM) and product development process (PDP) activities require a high proportion of company resources such as budget and manpower. Therefore, manufacturing companies are striving to strengthen SCM, PDP and KS activities in order to accelerate rates of manufacturing process improvement, ultimately resulting in higher levels of business performance (BP). A theoretical framework, along with a number of hypotheses, is proposed and empirically tested through correlation, factor and path analyses. Design/methodology/approach: A questionnaire survey was administered to a sample of electronic manufacturing companies operating in Taiwan to facilitate testing the proposed relationships. More than 170 respondents from 83 organisations responded to the survey. The study identified top management commitment and employee empowerment, supplier evaluation and selection, and design simplification and modular design as the key business activities that are strongly associated with business performance. Findings: The empirical study supports the view that key manufacturing business activities (i.e., SCM, PDP and KS) are positively associated with BP. The findings also revealed that some specific business activities, such as SCMF1, PDPF2 and KSF1, have the strongest influence on particular business outcomes (i.e., BPF1 and BPF2) within the context of electronic manufacturing companies operating in Taiwan. Practical implications: The finding regarding the relationship between SCM and BP identified the essential role of supplier evaluation and selection in improving business competitiveness and long-term performance.
The processes of forming knowledge in companies, such as creation, storage/retrieval and transfer, do not necessarily lead to enhanced business performance; only effectively delivering knowledge to the right person at the right time does. Originality/value: Based on these findings, it is recommended that companies involve suppliers in partnerships to continuously improve operations and enhance product design efforts, which would ultimately enhance business performance. Business performance depends more on an employee’s ability to turn knowledge into effective action.
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that the statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective analysis of 3-way data arrays (users x keywords x time), known as high order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
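The basic data structure behind both the 3-way and 4-way analyses can be sketched as follows (a hypothetical toy chat log, pure Python; real systems would use a tensor library): accumulate message counts into a users x keywords x time array, then matricize it, which is the usual first step of tensor analysis:

```python
def build_tensor(messages, n_users, n_keywords, n_bins):
    """Accumulate (user, keyword, time-bin) message counts into a
    dense 3-way array T[u][k][t]."""
    T = [[[0] * n_bins for _ in range(n_keywords)]
         for _ in range(n_users)]
    for u, k, t in messages:
        T[u][k][t] += 1
    return T

def unfold_mode0(T):
    """Mode-1 matricization: each row flattens one user's
    keyword-by-time slice into a vector."""
    n_k, n_t = len(T[0]), len(T[0][0])
    return [[T[u][k][t] for t in range(n_t) for k in range(n_k)]
            for u in range(len(T))]

# Hypothetical chat log: (user id, keyword id, time bin).
msgs = [(0, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, 1, 1)]
T = build_tensor(msgs, n_users=2, n_keywords=2, n_bins=2)
M = unfold_mode0(T)
print(T[1][1][1])  # → 2
```

Adding a fourth index for the server turns this into the 4-way construction used to study server and window-size sensitivity.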
Abstract:
As e-commerce becomes more and more popular, the number of customer reviews that a product receives grows rapidly. In order to enhance customer satisfaction and shopping experiences, it has become important to analyse customers’ reviews to extract opinions on the products that they buy. Thus, opinion mining is becoming more important than before, especially for analysing and forecasting customer behaviour for business purposes: the right decision about producing new products or services, based on data about customers’ characteristics, means profit for an organisation. This paper proposes a new architecture for opinion mining, which uses a multidimensional model to integrate customers’ characteristics and their comments about products (or services). The key step to achieving this objective is to transform comments (opinions) into a fact table that includes several dimensions, such as customers, products, time and locations. This research presents a comprehensive way to calculate customers’ orientation towards all possible product attributes.
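The comment-to-fact-table step can be sketched as below. Everything here is hypothetical, the field names, the toy lexicon standing in for a real sentiment scorer, and the roll-up, but it shows the shape of the transform the abstract describes:

```python
from collections import defaultdict

def to_fact_table(comments, score):
    """Transform raw comments into fact rows keyed by the model's
    dimensions (customer, product, time, location), with the mined
    sentiment per product attribute as the measure."""
    facts = []
    for c in comments:
        for attribute, phrase in c["opinions"].items():
            facts.append({
                "customer": c["customer"], "product": c["product"],
                "time": c["time"], "location": c["location"],
                "attribute": attribute, "sentiment": score(phrase),
            })
    return facts

def orientation_by_attribute(facts):
    """Roll the fact table up to the average orientation per
    (product, attribute) pair."""
    acc = defaultdict(list)
    for f in facts:
        acc[(f["product"], f["attribute"])].append(f["sentiment"])
    return {k: sum(v) / len(v) for k, v in acc.items()}

# Toy sentiment lexicon (a stand-in for a real opinion-mining scorer).
lexicon = {"great": 1.0, "poor": -1.0, "ok": 0.5}
comments = [
    {"customer": "c1", "product": "phone", "time": "2024-01",
     "location": "AU", "opinions": {"battery": "poor", "screen": "great"}},
    {"customer": "c2", "product": "phone", "time": "2024-02",
     "location": "US", "opinions": {"battery": "ok"}},
]
facts = to_fact_table(comments, lexicon.get)
print(orientation_by_attribute(facts)[("phone", "battery")])  # → -0.25
```

Because every fact row carries the full set of dimensions, the same table can be sliced by customer segment, region, or time period, which is what the multidimensional model buys over plain per-product aggregation.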
Abstract:
Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, in order for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, there has been widespread adoption of standard schemas in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community take a closer look at how well these standard schemas have fared in real-world applications, to identify not only their advantages but also the operational challenges that real users face. In this paper, we both examine the usability of standard schemas in a comparison that spans multiple disciplines, and describe our first step at resolving some of these issues in our Semantic Modeling System. We evaluate our Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, which we conducted with domain experts. We discuss how our Semantic Modeling System can help address the broader problem, and also discuss a number of challenges that still remain.