1000 results for perturbative methods
Abstract:
Background: The 2003 Bureau of Labor Statistics American Time Use Survey (ATUS) contains 438 distinct primary activity variables that can be analyzed with regard to how Americans spend their time. The Compendium of Physical Activities is used to code physical activities derived from various surveys, logs, and diaries to facilitate comparison of coded intensity levels across studies.
Methods: This paper describes the methods, challenges, and rationale for linking Compendium estimates of physical activity intensity (METs, metabolic equivalents) with all activities reported in the 2003 ATUS.
Results: The assigned ATUS intensity levels are not intended to compute the energy costs of physical activity in individuals. Instead, they are intended to identify time spent in activities broadly classified by type and intensity. This function will complement public health surveillance systems and aid in policy and health-promotion activities. For example, one planned application of this work is the descriptive epidemiology of time spent in common physical activity intensity categories.
Conclusions: Metabolic coding of the ATUS by linking it with the Compendium of Physical Activities can make important contributions to our understanding of Americans' time spent in health-related physical activity.
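The linkage described here is essentially a lookup from ATUS activity codes to Compendium MET values, followed by binning into conventional intensity categories (sedentary below 1.5 METs, light 1.5–2.9, moderate 3.0–5.9, vigorous 6.0 and above). A minimal sketch in Python; the activity codes and MET values below are illustrative placeholders, not the published linkage:

```python
# Illustrative sketch of linking activity codes to Compendium MET values.
# The codes and MET values here are hypothetical placeholders, not the
# actual ATUS-Compendium linkage described in the paper.
MET_LOOKUP = {
    "020101": 3.3,   # e.g., interior cleaning (hypothetical code/value)
    "130112": 7.0,   # e.g., running (hypothetical code/value)
    "120303": 1.0,   # e.g., watching television (hypothetical code/value)
}

def intensity_category(met: float) -> str:
    """Bin a MET value into a conventional intensity category."""
    if met < 1.5:
        return "sedentary"
    if met < 3.0:
        return "light"
    if met < 6.0:
        return "moderate"
    return "vigorous"

def code_activity(atus_code: str) -> tuple[float, str]:
    met = MET_LOOKUP.get(atus_code, 1.5)  # fall back to a light default
    return met, intensity_category(met)

if __name__ == "__main__":
    for code in ("020101", "130112", "120303"):
        print(code, code_activity(code))
```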
Abstract:
LiteSteel Beam (LSB) is a new cold-formed steel beam produced by OneSteel Australian Tube Mills (OATM). The new beam is effectively a channel section with two rectangular hollow flanges and a slender web, and is manufactured using patented dual electric resistance welding and automated roll-forming technologies. OATM is promoting the use of LSBs as flexural members in residential construction. Using LSBs as back-to-back built-up sections is likely to improve their moment capacity. However, research on the flexural behaviour of back-to-back built-up LSBs showed that the detrimental effects of lateral distortional buckling observed in single LSB members remain in back-to-back built-up LSB members. The ultimate moment capacity of back-to-back LSB members is also reduced by lateral distortional buckling failure. Therefore an investigation was conducted to develop suitable strength improvement methods that are likely to mitigate lateral distortional buckling effects and hence improve the flexural strengths of back-to-back LSB members. This paper presents the details of this investigation, the results, and recommendations for the most suitable and cost-effective method, which significantly improves the moment capacities of back-to-back LSB members.
Abstract:
There has been much conjecture of late as to whether the patentable subject matter standard contains a physicality requirement. The issue came to a head when the Federal Circuit introduced the machine-or-transformation test in In re Bilski and declared it to be the sole test for determining subject matter eligibility. Many commentators criticized the test, arguing that it is inconsistent with Supreme Court precedent and the need for the patent system to respond appropriately to all new and useful innovation in whatever form it arises. Those criticisms were vindicated when, on appeal, the Supreme Court in Bilski v. Kappos dispensed with any suggestion that the patentable subject matter test involves a physicality requirement. In this article, the issue is addressed from a normative perspective: it asks whether the patentable subject matter test should contain a physicality requirement. The conclusion reached is that it should not, because such a limitation is not an appropriate means of encouraging much of the valuable innovation we are likely to witness during the Information Age. It is contended that not only are traditionally recognized mechanical, chemical and industrial manufacturing processes patent eligible, but that patent eligibility extends to non-machine-implemented and non-physical methods that have no connection with a physical device and do not cause a physical transformation of matter. Concerns that there is a trend of overreaching commoditization or propertization, in which the boundaries of patent law have been expanded too far, are unfounded, since the strictures of novelty, nonobviousness and sufficiency of description will exclude undeserving subject matter from patentability. The argument made is that introducing a physicality requirement would have unintended adverse effects in various fields of technology, particularly those emerging technologies that are likely to have a profound social effect in the future.
Abstract:
Longitudinal panel studies of large, random samples of business start-ups captured at the pre-operational stage allow researchers to address core issues for entrepreneurship research, namely, the processes of creation of new business ventures as well as their antecedents and outcomes. Here, we perform a methods-orientated review of all 83 journal articles that have used this type of data set, our purpose being to assist users of current data sets as well as designers of new projects in making the best use of this innovative research approach. Our review reveals a number of methods issues that are largely particular to this type of research. We conclude that amidst exemplary contributions, much of the reviewed research has not adequately managed these methods challenges, nor has it made use of the full potential of this new research approach. Specifically, we identify and suggest remedies for context-specific and interrelated methods challenges relating to sample definition, choice of level of analysis, operationalization and conceptualization, use of longitudinal data and dealing with various types of problematic heterogeneity. In addition, we note that future research can make further strides towards full utilization of the advantages of the research approach through better matching (from either direction) between theories and the phenomena captured in the data, and by addressing some under-explored research questions for which the approach may be particularly fruitful.
Abstract:
Traditionally, transport disadvantage has been identified using accessibility analysis, although the effectiveness of the accessibility planning approach to improving access to goods and services is not known. This paper undertakes a comparative assessment of measures of mobility, accessibility, and participation used to identify transport disadvantage using the concept of activity spaces. Seven-day activity-travel diary data for 89 individuals were collected from two case study areas located in rural Northern Ireland. A spatial analysis was conducted to select the case study areas using criteria derived from the literature. The criteria relate to the levels of area accessibility and area mobility, which are known to influence the nature of transport disadvantage. Using the activity-travel diary data, individuals' weekly as well as day-to-day variations in activity-travel patterns were visualised. A model was developed using the ArcGIS ModelBuilder tool and was run to derive scores for individual levels of mobility, accessibility, and participation in activities from the geovisualisation. Using these scores, a multiple regression analysis was conducted to identify patterns of transport disadvantage. This study found a positive association between mobility and accessibility, between mobility and participation, and between accessibility and participation in activities. However, area accessibility and area mobility were found to have little impact on individual mobility, accessibility, and participation in activities. Income vis-à-vis car ownership was found to have a significant impact on individual levels of mobility and accessibility, whereas participation in activities was found to be a function of individual levels of income and their occupational status.
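The final analytical step described above is an ordinary multiple regression of the geovisualisation-derived scores on socio-economic variables. A minimal sketch in Python using statsmodels; the variable names and synthetic data are assumptions for illustration, not the study's dataset:

```python
# Hedged sketch: regress a participation score on mobility, accessibility,
# income, and occupational status, in the spirit of the analysis above.
# Data are synthetic; variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 89  # the study's sample size
df = pd.DataFrame({
    "mobility": rng.normal(50, 10, n),
    "accessibility": rng.normal(40, 8, n),
    "income": rng.normal(30, 12, n),      # e.g., in thousands
    "employed": rng.integers(0, 2, n),    # occupational status dummy
})
# Synthetic outcome loosely following the reported associations.
df["participation"] = (0.3 * df["mobility"] + 0.2 * df["accessibility"]
                       + 0.4 * df["income"] + 2.0 * df["employed"]
                       + rng.normal(0, 5, n))

X = sm.add_constant(df[["mobility", "accessibility", "income", "employed"]])
model = sm.OLS(df["participation"], X).fit()
print(model.summary())
```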
Comparison of standard image segmentation methods for segmentation of brain tumors from 2D MR images
Abstract:
In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task due to the complex nature of the images. The brain has a particularly complicated structure, and its precise segmentation is very important for detecting tumors, edema, and necrotic tissues in order to prescribe appropriate therapy. Magnetic Resonance Imaging is an important diagnostic imaging technique utilized for early detection of abnormal changes in tissues and organs. It possesses good contrast resolution for different tissues and is thus preferred over Computerized Tomography for brain study. Therefore, the majority of research in medical image segmentation concerns MR images. As the core of this research, a set of MR images was segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. Subsequently, the resultant images from the different segmentation techniques were compared with each other and analyzed by professional radiologists to determine which segmentation technique is the most accurate. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method to segment a brain tumor from a Magnetic Resonance Image.
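Otsu's method chooses a global threshold that maximises the between-class variance of the grey-level histogram. A minimal sketch with scikit-image; the file path is a placeholder, and a real pipeline would typically add skull stripping, bias-field correction, and denoising beforehand:

```python
# Minimal sketch: Otsu thresholding of a 2D MR slice with scikit-image.
# "brain_slice.png" is a placeholder path, not data from the paper.
import numpy as np
from skimage import io, filters, morphology

image = io.imread("brain_slice.png", as_gray=True)

# Otsu picks the threshold that maximises between-class variance
# of the intensity histogram.
threshold = filters.threshold_otsu(image)
mask = image > threshold

# Light morphological clean-up to remove small spurious regions.
mask = morphology.remove_small_objects(mask, min_size=64)

print(f"Otsu threshold: {threshold:.4f}")
print(f"Segmented fraction of pixels: {mask.mean():.2%}")
```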
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations, respectively.
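The Ornstein–Uhlenbeck process, $dX_t = \kappa(\theta - X_t)\,dt + \sigma\,dW_t$, is one of the rare cases where the transition density is available in closed form (it is Gaussian), so exact maximum likelihood is straightforward. A minimal sketch with NumPy/SciPy; the simulated data, true parameter values, and starting values are assumptions for illustration:

```python
# Exact MLE for the Ornstein-Uhlenbeck process
#   dX = kappa*(theta - X) dt + sigma dW,
# whose transition density is Gaussian in closed form:
#   X_{t+dt} | X_t ~ N(theta + (X_t - theta)*exp(-kappa*dt),
#                      sigma^2 * (1 - exp(-2*kappa*dt)) / (2*kappa))
# Data are simulated; the "true" parameters below are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.3, 1 / 252, 5000

# Simulate an exact OU path using the closed-form transition.
x = np.empty(n)
x[0] = theta
for i in range(1, n):
    mean = theta + (x[i - 1] - theta) * np.exp(-kappa * dt)
    var = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
    x[i] = mean + np.sqrt(var) * rng.standard_normal()

def neg_log_lik(params):
    k, th, s = params
    if k <= 0 or s <= 0:
        return np.inf
    mean = th + (x[:-1] - th) * np.exp(-k * dt)
    var = s**2 * (1 - np.exp(-2 * k * dt)) / (2 * k)
    return -norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var)).sum()

result = minimize(neg_log_lik, x0=[1.0, 0.5, 0.5], method="Nelder-Mead")
print("MLE (kappa, theta, sigma):", result.x)
```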
Abstract:
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
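Using the definition above, the (normalised) margin of a training example under a weighted voting classifier is the weighted fraction of votes for the correct label minus the largest weighted fraction for any incorrect label. A minimal sketch; the base-classifier votes and weights below are illustrative placeholders:

```python
# Sketch: margins of training examples under a weighted voting classifier,
# following the definition in the abstract (correct votes minus the maximum
# votes for any incorrect label, here normalised by total vote weight).
# Votes and weights are illustrative placeholders.
import numpy as np

def margins(votes, weights, y_true, n_classes):
    """votes: (n_examples, n_learners) predicted labels;
    weights: (n_learners,) non-negative voting weights."""
    n = votes.shape[0]
    score = np.zeros((n, n_classes))
    for j, w in enumerate(weights):
        score[np.arange(n), votes[:, j]] += w
    score /= weights.sum()  # normalise so margins lie in [-1, 1]
    correct = score[np.arange(n), y_true]
    score[np.arange(n), y_true] = -np.inf  # mask out the true label
    return correct - score.max(axis=1)

# Three weak learners voting on four examples with labels in {0, 1, 2}.
votes = np.array([[0, 0, 1], [1, 1, 1], [2, 0, 2], [1, 2, 0]])
weights = np.array([0.5, 0.3, 0.2])
print(margins(votes, weights, y_true=np.array([0, 1, 2, 1]), n_classes=3))
```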
Abstract:
Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that not all generalizations preserve the nice property of Bayes consistency. We provide a necessary and sufficient condition for consistency which applies to a large class of multiclass classification methods. The approach is illustrated by applying it to some multiclass methods proposed in the literature.
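One common generalization in the class the abstract refers to is the one-vs-all reduction: fit one binary scorer per class and predict the class with the highest score. Whether such a reduction is Bayes consistent depends on the binary surrogate loss used, which is precisely the kind of condition the paper characterizes. A minimal sketch with scikit-learn; the wrapper class and the synthetic data are assumptions for illustration:

```python
# Sketch: one-vs-all generalization of a binary method to K classes.
# Consistency of this reduction depends on the binary surrogate loss;
# logistic regression's log-loss is one standard choice of base learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

class OneVsAll:
    def __init__(self, make_binary_clf):
        self.make_binary_clf = make_binary_clf

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One binary problem per class: this class vs. the rest.
        self.clfs_ = [self.make_binary_clf().fit(X, (y == c).astype(int))
                      for c in self.classes_]
        return self

    def predict(self, X):
        # Predict the class whose binary scorer is most confident.
        scores = np.column_stack([c.decision_function(X) for c in self.clfs_])
        return self.classes_[scores.argmax(axis=1)]

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0).astype(int)  # 3 classes
clf = OneVsAll(lambda: LogisticRegression()).fit(X, y)
print("Training accuracy:", (clf.predict(X) == y).mean())
```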