364 results for Processing Technologies


Relevance:

20.00%

Publisher:

Abstract:

In Web service based systems, new value-added Web services can be constructed by integrating existing Web services. A Web service may have many implementations that are functionally identical but have different Quality of Service (QoS) attributes, such as response time, price, reputation, reliability and availability. Thus, a significant research problem in Web service composition is how to select an implementation for each of the component Web services so that the overall QoS of the composite Web service is optimal: the so-called QoS-aware Web service composition problem. In some composite Web services there are dependencies and conflicts between Web service implementations, constraints that existing approaches cannot handle. This paper tackles the QoS-aware Web service composition problem with inter-service dependencies and conflicts using a penalty-based genetic algorithm (GA). Experimental results demonstrate the effectiveness and scalability of the penalty-based GA.
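
The constraint-handling idea lends itself to a compact illustration. Below is a minimal, hypothetical Java sketch (not the paper's actual implementation) of a penalty-based fitness function: a candidate composition assigns one implementation to each component service, and each dependency or conflict violation reduces fitness rather than invalidating the candidate outright, so the GA can still search near-feasible regions.

```java
import java.util.List;

// Hypothetical penalty-based fitness for QoS-aware composition. candidate[i]
// is the implementation chosen for component service i; qosUtility[i][j] is
// the aggregated QoS utility of implementation j for service i (higher is
// better). The constraint encodings are illustrative assumptions.
public class PenaltyFitness {
    // dependencies: {a, x, b, y} means "if service a uses impl x, service b must use impl y"
    // conflicts:    {a, x, b, y} means "impl x of service a excludes impl y of service b"
    static double fitness(int[] candidate, double[][] qosUtility,
                          List<int[]> dependencies, List<int[]> conflicts,
                          double penaltyWeight) {
        double utility = 0.0;
        for (int i = 0; i < candidate.length; i++)
            utility += qosUtility[i][candidate[i]];

        int violations = 0;
        for (int[] d : dependencies)
            if (candidate[d[0]] == d[1] && candidate[d[2]] != d[3]) violations++;
        for (int[] c : conflicts)
            if (candidate[c[0]] == c[1] && candidate[c[2]] == c[3]) violations++;

        // Penalise rather than reject: infeasible candidates keep a gradient
        // toward feasibility instead of being discarded outright.
        return utility - penaltyWeight * violations;
    }
}
```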

Relevance:

20.00%

Publisher:

Abstract:

What are the ethical and political implications when the very foundations of life, things of awe and spiritual significance, are translated into products accessible to few people? This book critically analyses this historic recontextualisation, and uniquely links biotechnology with media and citizenship. Through mediation, when meaning moves 'from one text to another, from one discourse to another', biotechnology is transformed into analysable data and into public discourses. Like any other commodity, biological products have been commodified, and because enormous speculative investment rests on them, risks will be understated, benefits overstated, and those benefits unfairly distributed. Already, the bioprospecting of Southern megadiverse nations, legally sanctioned by U.S. property rights conventions, has led to wealth and health benefits in the North. Crucial to this development are biotechnological discourses that shift meanings from a "language of life" into technocratic discourses infused with neo-liberal economic assumptions that promise progress and benefits for all. Central to this is the mass media's representation of biotechnology for an audience with poor scientific literacy. Yet even apparently benign biotechnology spawned by the Human Genome Project, such as prenatal screening, has eugenic possibilities, and genetic codes for illness are eagerly sought by insurance companies seeking to exclude certain people. These issues raise important questions about a citizenship founded on moral responsibility for the wellbeing of society, now and into the future. After all, biotechnology is very much concerned with the essence of life itself. This book provides a space for alternative and dissident voices beyond the hype that surrounds biotechnology.

Relevance:

20.00%

Publisher:

Abstract:

English has long been the subject where print text has reigned supreme. Increasingly, however, in our networked and electronically connected world, we can use digital technologies to create and respond to texts studied in English classrooms. The current approach to English includes the concept of 'multiliteracies', which suggests that print texts alone are 'necessary but not sufficient' (EQ, 2000) and that literacy includes the flexible and sustainable mastery of a repertoire of practices, including the decoding and deployment of media technologies (EQ, 2000). This has become more possible in Australia as secondary students gain increasing access to computers and online platforms at home and at school. With the advent of Web 2.0, with its interactive platforms and free media-making software, teachers and students can access information and emerging online literature in English covering a range of text types and new forms, for authentic audiences and contexts. This chapter is concerned with responding to literary and mediated texts through the use of technologies. If we remain open to trying out new textual forms and see our 'digital native' students (Prensky, 2007) as our best resource, we can move beyond technophobia, become 'digital travellers' ourselves, and embrace new digital forms in our classrooms.

Relevance:

20.00%

Publisher:

Abstract:

Structural health monitoring (SHM) is the term applied to the procedure of monitoring a structure's performance, assessing its condition and carrying out appropriate retrofitting so that it performs reliably, safely and efficiently. Bridges form an important part of a nation's infrastructure. They deteriorate due to age and changing load patterns, so early detection of damage helps prolong their lives and prevent catastrophic failures. Monitoring of bridges has traditionally been done by visual inspection. With recent developments in sensor technology and the availability of advanced computing resources, newer techniques have emerged for SHM. Acoustic emission (AE) is one such technology that is attracting the attention of engineers and researchers around the world. This paper discusses the use of AE technology in health monitoring of bridge structures, with a special focus on analysis of recorded data. AE waves are stress waves generated by mechanical deformation of material and can be recorded by sensors attached to the surface of the structure. Analysis of the AE signals provides vital information regarding the nature of the source of emission. Signal processing of the AE waveform data can be carried out in several ways and is predominantly based on the time and frequency domains. Short-time Fourier transform and wavelet analysis have proved to be superior alternatives to traditional frequency-based analysis in extracting information from recorded waveforms. This paper presents some preliminary results of applying these analysis tools in signal processing of recorded AE data.
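
As an illustration of the time-frequency analysis discussed above, here is a minimal short-time Fourier transform (STFT) sketch in Java. It assumes the recorded AE waveform is already available as a double array; the naive DFT inner loop stands in for the FFT a real implementation would use.

```java
// Minimal STFT: slide a Hann window across the signal and compute the
// magnitude spectrum of each frame. The result is a time-frequency map,
// the representation used to localise transient AE events.
public class StftSketch {
    static double[][] stft(double[] signal, int windowSize, int hopSize) {
        int frames = (signal.length - windowSize) / hopSize + 1;
        double[][] magnitude = new double[frames][windowSize / 2];
        for (int f = 0; f < frames; f++) {
            int offset = f * hopSize;
            for (int k = 0; k < windowSize / 2; k++) {        // frequency bin
                double re = 0, im = 0;
                for (int n = 0; n < windowSize; n++) {        // naive DFT term
                    double hann = 0.5 * (1 - Math.cos(2 * Math.PI * n / (windowSize - 1)));
                    double x = signal[offset + n] * hann;
                    double angle = -2 * Math.PI * k * n / windowSize;
                    re += x * Math.cos(angle);
                    im += x * Math.sin(angle);
                }
                magnitude[f][k] = Math.sqrt(re * re + im * im);
            }
        }
        return magnitude;  // magnitude[frame][bin]
    }
}
```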

Relevance:

20.00%

Publisher:

Abstract:

This study investigated human visual processing of simple two-colour patterns using a delayed match-to-sample paradigm with positron emission tomography (PET). The study is unique in that the authors specifically designed the visual stimuli to be the same for both pattern and colour recognition, with all patterns being abstract shapes, not easily verbally coded, composed of two-colour combinations. This was done to explore the brain regions required for both colour and pattern processing and to separate the areas of activation required for one or the other. Ten right-handed male volunteers aged 18–35 years were recruited. Both tasks activated similar occipital regions, the major difference being more extensive activation in pattern recognition. A right-sided network involving the inferior parietal lobule, the head of the caudate nucleus, and the pulvinar nucleus of the thalamus was common to both paradigms. Pattern recognition also activated the left temporal pole and right lateral orbital gyrus, whereas colour recognition activated the left fusiform gyrus and several right frontal regions.

Relevance:

20.00%

Publisher:

Abstract:

Several protocols for the isolation of mycobacteria from water exist, but there is no established standard method. This study compared methods of processing potable water samples for the isolation of Mycobacterium avium and Mycobacterium intracellulare, using spiked sterilized water and tap water decontaminated with 0.005% cetylpyridinium chloride (CPC). Samples were concentrated by centrifugation or filtration and inoculated onto Middlebrook 7H10 and 7H11 plates and Lowenstein-Jensen slants, and into mycobacterial growth indicator tubes with or without polymyxin, azlocillin, nalidixic acid, trimethoprim, and amphotericin B. The solid media were incubated at 32°C, at 35°C, and at 35°C with CO2, and read weekly. The results suggest that filtration is a more sensitive concentration method than centrifugation for the isolation of mycobacteria from water. The addition of sodium thiosulfate may not be necessary and may reduce the yield. Middlebrook 7H10 and 7H11 were equally sensitive culture media. CPC decontamination, while effective in reducing the growth of contaminants, also significantly reduces mycobacterial numbers. There was no difference between the incubation temperatures at 3 weeks.

Relevance:

20.00%

Publisher:

Abstract:

Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds, making it very difficult for a human operator to detect events effectively as they happen. Recently, computer vision research has begun to address ways to process some of this data automatically in order to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as means to aid operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of these more advanced technologies: to recognise an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis and human identification.

Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes, and detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or because they are unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making separate detection unnecessary beyond an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects.

This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm performs well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO; evaluation for video understanding) database, demonstrating significant improvement in detection and tracking performance over a baseline system.

A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, which in turn benefits from the particle filter's improved performance under the uncertainty arising from occlusion and noise. The system is evaluated using the ETISEO database.

The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. A middle fusion scheme yields the best results, demonstrating a significant improvement in performance over a system using either modality individually.

Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore to improved security in areas under surveillance.
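
To make the particle filtering concrete, the following is a generic bootstrap particle filter sketch in Java, in the spirit of the Condensation algorithm the SCF builds on. The random-walk motion model and the pluggable likelihood function are illustrative placeholders, not the thesis's actual design; a real tracker would score particles against image features such as colour histograms.

```java
import java.util.Random;
import java.util.function.BiFunction;

// Bootstrap particle filter for 2-D position tracking (illustrative sketch).
public class ParticleFilterSketch {
    final int n;                 // number of particles
    final double[][] particles;  // particles[i] = {x, y}
    final double[] weights;
    final Random rng = new Random(42);

    ParticleFilterSketch(int n, double x0, double y0) {
        this.n = n;
        particles = new double[n][];
        weights = new double[n];
        for (int i = 0; i < n; i++) {
            particles[i] = new double[] {x0, y0};  // initial (often manual) detection
            weights[i] = 1.0 / n;
        }
    }

    // One predict-update-resample cycle per frame. likelihood(x, y) returns
    // the (unnormalised) probability of the target being at (x, y).
    void step(BiFunction<Double, Double, Double> likelihood, double noise) {
        double sum = 0;
        for (int i = 0; i < n; i++) {
            particles[i][0] += rng.nextGaussian() * noise;   // predict: random walk
            particles[i][1] += rng.nextGaussian() * noise;
            weights[i] = likelihood.apply(particles[i][0], particles[i][1]);
            sum += weights[i];
        }
        for (int i = 0; i < n; i++) weights[i] /= sum;       // normalise
        resample();
    }

    // Systematic resampling: duplicate high-weight particles, drop low-weight ones.
    void resample() {
        double[][] next = new double[n][];
        double step = 1.0 / n, u = rng.nextDouble() * step, c = weights[0];
        int i = 0;
        for (int j = 0; j < n; j++) {
            while (u > c && i < n - 1) c += weights[++i];
            next[j] = particles[i].clone();
            u += step;
        }
        for (int k = 0; k < n; k++) {
            particles[k] = next[k];
            weights[k] = 1.0 / n;
        }
    }
}
```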

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information" (NGMI), which segments Chinese documents into n-character words or phrases using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort required to prepare and maintain manually segmented Chinese text for training purposes, and to maintain ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words; NGMI extends this approach to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
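
As a concrete illustration of the underlying statistic, the sketch below computes pointwise mutual information for a character bigram, the 2-character case that NGMI generalises to longer n-grams. The single-string corpus and the method name are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Pointwise mutual information MI(ab) = log( p(ab) / (p(a) * p(b)) ).
// A high score suggests the two characters co-occur far more often than
// chance, i.e. the pair likely forms a word. Assumes the bigram and both
// of its characters occur in the corpus.
public class BigramMI {
    static double pointwiseMI(String corpus, String bigram) {
        Map<String, Integer> uni = new HashMap<>(), bi = new HashMap<>();
        for (int i = 0; i < corpus.length(); i++) {
            uni.merge(corpus.substring(i, i + 1), 1, Integer::sum);
            if (i + 2 <= corpus.length())
                bi.merge(corpus.substring(i, i + 2), 1, Integer::sum);
        }
        double pAB = bi.getOrDefault(bigram, 0) / (double) (corpus.length() - 1);
        double pA = uni.get(bigram.substring(0, 1)) / (double) corpus.length();
        double pB = uni.get(bigram.substring(1, 2)) / (double) corpus.length();
        return Math.log(pAB / (pA * pB));
    }
}
```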

Relevance:

20.00%

Publisher:

Abstract:

A laboratory-scale twin-screw extruder has been interfaced with a near infrared (NIR) spectrometer via a fibre-optic link so that NIR spectra can be collected continuously during small-scale experimental melt-state processing of polymeric materials. This system can be used to investigate melt-state processes such as reactive extrusion in real time, in order to explore the kinetics and mechanism of the reaction. A further advantage of the system is its capability to measure apparent viscosity simultaneously, which gives important additional information about molecular weight changes and polymer degradation during processing. The system was used to study the melt processing of a nanocomposite consisting of a thermoplastic polyurethane and an organically modified layered silicate.

Relevance:

20.00%

Publisher:

Abstract:

A successful urban management system for a Ubiquitous Eco City requires an integrated approach. This integration includes bringing together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making mechanism and the necessary infrastructure and technologies. The rapid development of information and telecommunication technologies and their platforms in the late 20th Century improved urban management and enhanced the quality of life and place. Telecommunication technologies provide an important base for monitoring and managing activities over wired, wireless or fibre-optic networks, and technology convergence in particular creates new ways in which these technologies are used. The 21st Century is an era of converged information, in which people can access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones; this creates new opportunities in the management of Ubiquitous Eco Cities. This paper discusses recent developments in telecommunication networks and trends in convergence technologies, their implications for the management of Ubiquitous Eco Cities, and how this technological shift is likely to be beneficial in improving the quality of life and place. The paper also introduces recent approaches to urban management systems, such as intelligent urban management systems, that are suitable for Ubiquitous Eco Cities.

Relevance:

20.00%

Publisher:

Abstract:

We argue that web service discovery technology should help users navigate a complex problem space by providing suggestions for services that they may not be able to formulate themselves, as they lack the epistemic resources to do so. Free-text documents in service environments provide an untapped source of information for augmenting the epistemic state of the user, and hence their ability to search effectively for services. A quantitative approach to semantic knowledge representation is adopted in the form of semantic space models computed from these free-text documents. Knowledge of the user's agenda is promoted by associational inferences computed from the semantic space. The inferences are suggestive and aim to promote human abductive reasoning, guiding the user from fuzzy search goals to a better understanding of the problem space surrounding the given agenda. Experimental results are discussed based on a complex and realistic planning activity.
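
One simple way to realise a semantic space model of this kind is a term co-occurrence matrix queried with cosine similarity, sketched below in Java. The window-based construction over whitespace-tokenised documents is an illustrative assumption, not necessarily the paper's formulation; nearest neighbours of a query term in this space would serve as the associational suggestions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sparse co-occurrence semantic space: each term maps to a vector of
// context-term counts gathered within a fixed window.
public class SemanticSpaceSketch {
    static Map<String, Map<String, Integer>> build(List<String[]> docs, int window) {
        Map<String, Map<String, Integer>> space = new HashMap<>();
        for (String[] tokens : docs)
            for (int i = 0; i < tokens.length; i++)
                for (int j = Math.max(0, i - window);
                     j <= Math.min(tokens.length - 1, i + window); j++)
                    if (j != i)
                        space.computeIfAbsent(tokens[i], k -> new HashMap<>())
                             .merge(tokens[j], 1, Integer::sum);
        return space;
    }

    // Cosine similarity between two sparse term vectors; terms whose vectors
    // lie close to the query's are offered as associational suggestions.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```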

Relevance:

20.00%

Publisher:

Abstract:

Recent research has emphasised the importance, for new international ventures, of specific resources and competencies. The present research, in line with this stream, shows in particular the importance of the human capital that entrepreneurs acquire through past international experience. At the same time, we show that these organisations are supported by a deliberate intention to internationalise from inception. Our empirical research is based on the analysis of a sample of 466 new English and German high-technology firms. We show that this human capital is an asset that facilitates rapid penetration of foreign markets, all the more so when the new firm is accompanied by a deliberate strategic intention to internationalise. Similar conclusions extend to the resources the entrepreneur devotes to the start-up: the greater these resources, the more the internationalisation process tends to occur on a large scale; here too, the influence of these resources is amplified by the strategic intention to internationalise. Within the empirical literature on born-globals (firms that start up in a globalised market), this research provides one of the first empirical studies linking initial founding conditions to the likelihood of rapid international growth.

Relevance:

20.00%

Publisher:

Abstract:

SoundCipher is a software library, written in the Java language, that adds important music and sound features to the Processing environment, which is widely used by media artists but otherwise oriented toward computational graphics. This article introduces the SoundCipher library and its features, describes its influences and design intentions, and positions it within the field of computer music programming tools. SoundCipher makes the rich history of algorithmic music techniques accessible within one of today's most popular media art platforms, and provides an accessible means of learning to create algorithmic music and sound programs.
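
A minimal Processing sketch suggests the flavour of the library. The basic usage pattern (import arb.soundcipher.*, construct a SoundCipher, call playNote) follows SoundCipher's published examples, though exact signatures should be checked against the library's documentation.

```java
// Generative phrase in Processing using SoundCipher: a random walk over a
// pentatonic scale, one note per frame tick.
import arb.soundcipher.*;

SoundCipher sc;
float[] scale = {60, 62, 65, 67, 70};  // pentatonic MIDI pitches
int index = 2;

void setup() {
  sc = new SoundCipher(this);
  frameRate(4);  // four notes per second
}

void draw() {
  // Step -1, 0, or +1 through the scale, clamped to its bounds.
  index = constrain(index + floor(random(-1, 2)), 0, scale.length - 1);
  sc.playNote(scale[index], 80, 0.5);  // pitch, dynamic, duration (beats)
}
```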