823 results for Links and link-motion.
Abstract:
Background: Left ventricular wall motion on 2D echocardiography (2DE) is usually scored visually. We sought to examine the determinants of visually assessed wall motion scoring on 2DE by comparison with myocardial thickening quantified on MRI. Methods: Using a 16-segment model, we studied 287 segments in 30 patients aged 61 +/- 11 years (6 female) with ischaemic LV dysfunction (defined as at least 2 segments dysfunctional on 2DE). 2DE was performed in 5 views and wall motion scores (WMS) were assigned: 1 (normal), 103 segments; 2 (hypokinetic), 93 segments; 3 (akinetic), 87 segments. MRI was used to measure end-systolic wall thickness (ESWT), end-diastolic wall thickness (EDWT) and percentage systolic wall thickening (SWT%) in the plane of the 2DE, and to assess WMS visually in the same planes. No patient had a clinical ischaemic event between the tests. Results: Visual assessment of wall motion by 2DE and MRI showed moderate agreement (kappa = 0.425). Resting 2DE wall motion correlated significantly (p
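As context for the agreement statistic quoted above, the sketch below shows how Cohen's kappa is computed from a cross-tabulation of paired scores; the confusion counts are hypothetical placeholders for illustration, not the study's data.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix of paired scores."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total         # segments scored identically by both methods
    row_marginals = confusion.sum(axis=1) / total    # score distribution from 2DE
    col_marginals = confusion.sum(axis=0) / total    # score distribution from MRI
    p_chance = np.dot(row_marginals, col_marginals)  # agreement expected by chance alone
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical 3x3 cross-tabulation of wall motion scores (rows: 2DE, columns: MRI)
# for 1 = normal, 2 = hypokinetic, 3 = akinetic; the counts are illustrative only.
table = [[80, 15, 8],
         [18, 55, 20],
         [5, 23, 59]]
print(round(cohens_kappa(table), 3))
```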
Abstract:
Streaming video applications require high security as well as high computational performance. In video encryption, traditional selective algorithms have been used to partially encrypt the relatively important data in order to satisfy the streaming performance requirement. Most selective video encryption algorithms are inherited from still-image encryption algorithms, and encryption of motion vector data is not considered, on the assumption that motion vector data are not as important as pixel image data. Unfortunately, in some cases the motion vectors alone may be sufficient to leak useful video information. Moreover, motion vector data normally consume over half of the whole video stream bandwidth, so neglecting their security may be unwise. In this paper, we target this security problem and illustrate attacks at two different levels that can restore useful video information using motion vectors only. Further, an information analysis is made and a motion vector information model is built. Based on this model, we describe a new motion vector encryption algorithm called MVEA. We present experimental results for MVEA and evaluate the security strength and performance of the algorithm.
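The abstract does not specify how MVEA operates, so the following is only a minimal sketch of the general idea of selective motion-vector encryption: mask the motion field with a key-derived stream while leaving pixel data untouched. The function names and key handling are illustrative assumptions, not the paper's algorithm.

```python
import hmac, hashlib, struct

def _mask(key: bytes, index: int) -> int:
    """One pseudo-random masking byte per motion-vector component, derived from the key."""
    return hmac.new(key, struct.pack(">I", index), hashlib.sha256).digest()[0]

def scramble_motion_vectors(mvs, key: bytes):
    """Illustrative selective encryption of the motion field only.

    `mvs` is a sequence of (dx, dy) integer motion vectors. Each component is
    XOR-masked with a key-derived value; applying the same function again with
    the same key restores the original vectors, since XOR is its own inverse.
    Pixel data are left untouched, mirroring selective-encryption schemes.
    """
    return [(dx ^ _mask(key, 2 * i), dy ^ _mask(key, 2 * i + 1))
            for i, (dx, dy) in enumerate(mvs)]

# Example: scramble and recover a short motion field.
key = b"session-key"
vectors = [(3, -1), (0, 4), (-7, 2)]
hidden = scramble_motion_vectors(vectors, key)
assert scramble_motion_vectors(hidden, key) == vectors
```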
Abstract:
In 2001/02 five case study communities in both metropolitan and regional urban locations in Australia were chosen as test sites to develop measures of community strength across four domains: natural capital; produced economic capital; human capital; and social and institutional capital. Secondary data sources were used to develop measures for the first three domains. For the fourth domain, social and institutional capital, primary data collection was undertaken through sample surveys of households. A structured approach was devised, involving a survey instrument with scaled items relating to four elements (formal norms, informal norms, formal structures and informal structures) which embrace the concepts of trust, reciprocity, bonds, bridges, links and networks in the interaction of individuals with their community that are inherent in the notion of social capital. Exploratory principal components analysis was used to identify factors measuring those aspects of social and institutional capital, with confirmatory analysis conducted using Cronbach's alpha. This enabled the construction of four primary scales and 15 sub-scales as a tool for measuring social and institutional capital. Further analysis reveals that two measures, anomie and perceived quality of life and wellbeing, relate to certain primary scales of social capital.
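As a hedged sketch of the scale-construction steps named above (the item counts, sample size and factor assignment are illustrative assumptions, not the study's survey data), exploratory principal components analysis and Cronbach's alpha can be combined as follows.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability; rows are respondents, columns are scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 200 households x 8 scaled survey items (1-5 Likert scores).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 8)).astype(float)

# Exploratory step: principal components suggest how items group into candidate factors.
centred = responses - responses.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
loadings = vt[:2]   # loadings of the first two components on the 8 items

# Confirmatory step: Cronbach's alpha for the items assigned to one candidate scale.
print(loadings.shape, cronbachs_alpha(responses[:, :4]))
```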
Abstract:
Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it reinforces the background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution to this lies in searching external resources such as specialized glossaries and the internet. We show that a random selection of concept pairs from the Gene Ontology does not occur in a relevant corpus of texts from the journal Nature. In contrast, a significant proportion can be found on the internet. Thus, we conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.
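A minimal sketch of the kind of corpus check described above, assuming a local directory of plain-text articles and simple case-insensitive string matching; the directory name and term pairs are hypothetical, not the study's pipeline.

```python
from pathlib import Path

def cooccurring_pairs(corpus_dir: str, concept_pairs):
    """Return the subset of (term_a, term_b) pairs found together in at least one document.

    A pair 'occurs' here if both terms appear, case-insensitively, in the same
    text file -- a deliberately simple proxy for the corpus evidence an
    ontology-learning system would need.
    """
    found = set()
    for doc in Path(corpus_dir).glob("*.txt"):
        text = doc.read_text(errors="ignore").lower()
        for a, b in concept_pairs:
            if a.lower() in text and b.lower() in text:
                found.add((a, b))
    return found

# Hypothetical usage with Gene Ontology-style term pairs and a local article corpus.
pairs = [("mitochondrion", "apoptotic process"), ("cell wall", "signal transduction")]
print(cooccurring_pairs("nature_corpus", pairs))
```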
Abstract:
A review of the extant literature concludes that market-driven intangibles and innovations are increasingly considered to be the most critical firm-specific resources, but also finds a lack of elaboration of which types of these resources are most important. In this paper, we incorporate these observations into a conceptual model and link it to highly developed institutional settings for model evaluation. From the point of view of firm revenue management, we anticipate that performance advantages created through the deployment of intellectual and relational capital in marketing and innovation are more likely to be superior. In essence, they constitute the integration of organisational intangibles at both the cognitive and behavioural levels to create an idiosyncratic combination for each firm. Our research findings show feasible paths for sharpening the edge of market-driven intangibles and innovations. We discuss the key results for research and practice.
Abstract:
Respiration is a complex activity. If the relationship between all neurological and skeletomuscular interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, the provision of appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information on respiratory performance can be gained from non-invasive monitoring of chest and abdomen motion. Currently available devices are not well suited to spirometric measurement for ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper-body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometrically based surface plethysmography. In the first instance, Chapter 2 deals with an array of tactile sensors used as a progression of existing, previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate on the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's volumetric performance compares favourably with that of existing systems, and it provides continuous transient measurement of respiratory motion within an acceptable accuracy using approximately 20 sensing elements. Because of the potential size and complexity of the system, it is possible to deploy it as a fully mobile ambulatory monitoring device that may be used outside the laboratory. It provides a means by which to isolate coupled physiological functions and thus allows individual contributions to be analysed separately, facilitating greater understanding of respiratory physiology and diagnostic capabilities. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and has the prospect, with future development, of being incorporated into a garment-based clinical tool.
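The abstract does not state the volume-estimation method, so the sketch below only illustrates one plausible calibration approach under stated assumptions: roughly 20 surface-sensor signals are regressed against a simultaneous spirometer volume trace, and the fitted linear map is then used to estimate volume from new sensor readings. All data here are synthetic placeholders.

```python
import numpy as np

# Hypothetical calibration data: 500 time samples of 20 surface sensors
# recorded alongside a reference spirometer volume trace (litres).
rng = np.random.default_rng(1)
sensors = rng.normal(size=(500, 20))          # placeholder sensor displacement signals
true_weights = rng.normal(size=20)
spirometer = sensors @ true_weights + 0.05 * rng.normal(size=500)

# Fit a linear mapping (with intercept) from the sensor matrix to volume.
X = np.column_stack([sensors, np.ones(len(sensors))])
coef, *_ = np.linalg.lstsq(X, spirometer, rcond=None)

def estimate_volume(sample: np.ndarray) -> float:
    """Estimate instantaneous respiratory volume from one vector of sensor readings."""
    return float(np.append(sample, 1.0) @ coef)

print(estimate_volume(sensors[0]), spirometer[0])
```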
Abstract:
Neuroimaging studies of cortical activation during image transformation tasks have shown that mental rotation may rely on similar brain regions to those underlying visual perceptual mechanisms. The V5 complex, which is specialised for visual motion, is one region that has been implicated. We used functional magnetic resonance imaging (fMRI) to investigate rotational and linear transformation of stimuli. Areas of significant brain activation were identified for each of the primary mental transformation tasks in contrast to its own perceptual reference task, which was cognitively matched in all respects except for the variable of interest. Analysis of group data for perception of rotational and linear motion showed activation in areas corresponding to V5 as defined in earlier studies. Both rotational and linear mental transformations activated Brodmann Area (BA) 19 but did not activate V5. An area within the inferior temporal gyrus, representing an inferior satellite area of V5, was activated by both the rotational perception and rotational transformation tasks, but showed no activation in response to linear motion perception or transformation. The findings demonstrate the extent to which neural substrates for image transformation and perception overlap and are distinct, as well as revealing functional specialisation within perception and transformation processing systems.
Abstract:
This paper investigates the role of absorptive capacity in the diffusion of global technology with sector and firm heterogeneity. We construct the FDI-intensity weighted global R&D stock for each industry and link it to Chinese firm-level panel data relating to 53,981 firms over the period 2001-2005. Non-parametric frontier analysis is employed to explore how absorptive capacity affects technical change and catch-up in the presence of global knowledge spillovers. We find that R&D activities and training at individual firms serve as an effective source of absorptive capability. The contribution of absorptive capacity varies according to the type of FDI and the extent of openness.
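A hedged sketch of the weighting step described above, assuming FDI-intensity weights are each source country's share of an industry's inward FDI; the column names, figures and exact formula are illustrative, not the paper's construction.

```python
import pandas as pd

# Hypothetical inputs: R&D stock by source country and industry, and inward FDI
# from each source country into the corresponding Chinese industry.
rd = pd.DataFrame({
    "industry": ["autos", "autos", "steel", "steel"],
    "source":   ["JPN",   "DEU",   "JPN",   "USA"],
    "rd_stock": [120.0,   90.0,    60.0,    150.0],
    "fdi":      [40.0,    10.0,    5.0,     20.0],
})

# FDI-intensity weights: each source's share of the industry's inward FDI.
rd["weight"] = rd["fdi"] / rd.groupby("industry")["fdi"].transform("sum")

# Weighted global R&D stock per industry, ready to merge with firm-level panel data.
weighted = (rd["weight"] * rd["rd_stock"]).groupby(rd["industry"]).sum()
print(weighted)
```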
Abstract:
Technological capabilities in Chinese manufacturing have been transformed in the last three decades. However, the extent to which and how domestic market oriented state owned enterprises (SOEs) have developed their capabilities remain important questions. The East Asian latecomer model has been adapted to study six Chinese SOEs in the automotive, steel and machine tools sectors to assess capability levels attained and the role of external sources and internal efforts in developing them. All six enterprises demonstrate high competence in operating established technology, managing investment and making product and process improvements but differ in innovative capability. While the East Asian latecomer model in which linking, leveraging and learning explain technological capability development is relevant for the companies studied, it needs to be adapted for Chinese SOEs to take account of types of external links and leverage of enterprises, the role of government, enterprise level management motives and means of financing development.
Abstract:
Humans consciously and subconsciously establish various links, form semantic images and reason in their minds, learn linking effects and rules, select linked individuals to interact with, and form closed loops through links while co-experiencing in multiple spaces over a lifetime. Machines are limited in these abilities, although various graph-based models have been used to link resources in the cyber space. The following are fundamental limitations of machine intelligence: (1) machines know few links and rules in the physical space, physiological space, psychological space, socio space and mental space, so it is not realistic to expect machines to discover laws and solve problems in these spaces; and (2) machines can only process pre-designed algorithms and data structures in the cyber space. They are limited in their ability to go beyond the cyber space, to learn linking rules, to know the effect of linking, and to explain computing results according to physical, physiological, psychological and socio laws. Linking various spaces will create a complex space: the Cyber-Physical-Physiological-Psychological-Socio-Mental Environment (CP3SME). Diverse spaces will emerge, evolve, compete and cooperate with each other to extend machine intelligence and human intelligence. From a multi-disciplinary perspective, this paper reviews previous ideas on various links, introduces the concept of cyber-physical society, proposes the ideal of the CP3SME including its definition, characteristics and multi-disciplinary revolution, and explores the methodology of linking through spaces for cyber-physical-socio intelligence. The methodology includes new models, principles, mechanisms, scientific issues, and philosophical explanation. The CP3SME aims at an ideal environment for humans to live and work in. Exploration will go beyond previous ideals on intelligence and computing.
Servitization and enterprization in the construction industry: the case of a specialist subcontractor
Abstract:
The current economic climate and a continuing fall in the output of the UK construction industry have led to falling prices and margins, particularly affecting those lower down the supply chain such as specialist subcontractors. Coen Ltd. is one such company, based in the West Midlands. Faced with a need to up its game, it has embarked on a business improvement programme concentrating on better operational efficiency, building stronger client relationships and delivering value-added services. Lacking appropriate internal resources, Coen has joined with Aston Business School in a 2-year ERDF-sponsored project to fulfil the transformation programme. The paper will describe the evolution of product-service offerings in construction, link this with the work being carried out at Coen with Aston, and outline the anticipated outcomes.
Abstract:
This paper examines UK and US primary care doctors' decision-making about older (aged 75 years) and midlife (aged 55 years) patients presenting with coronary heart disease (CHD). Using an analytic approach based on conceptualising clinical decision-making as a classification process, it explores the ways in which doctors' cognitive processes contribute to ageism in health-care at three key decision points during consultations. In each country, 56 randomly selected doctors were shown videotaped vignettes of actors portraying patients with CHD. The patients' ages (55 or 75 years), gender, ethnicity and social class were varied systematically. During the interviews, doctors gave free-recall accounts of their decision-making. The results do not establish that there was substantial ageism in the doctors' decisions, but rather suggest that diagnostic processes pay insufficient attention to the significance of older patients' age and its association with the likelihood of co-morbidity and atypical disease presentations. The doctors also demonstrated more limited use of 'knowledge structures' when diagnosing older than midlife patients. With respect to interventions, differences in the national health-care systems rather than patients' age accounted for the differences in doctors' decisions. US doctors were significantly more concerned about the potential for adverse outcomes if important diagnoses were untreated, while UK general practitioners cited greater difficulty in accessing diagnostic tests.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2] the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail out, and data entry), set-up costs can be high given the need to either adopt and acquire training in questionnaire development software or secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
Abstract:
* The research was supported by INTAS 00-397 and 00-626 Projects.