999 results for membrane computing


Relevance: 20.00%

Abstract:

Every day we engage in social interactions with other people. Our social life is characterised by norms that manifest as attitudinal and behavioural uniformities among people. With greater awareness of our social context, we can interact more efficiently. Any theory or model of human interaction that fails to include social concepts can be argued to lack a critical element. This paper identifies, from an interdisciplinary perspective, the social concepts that future context-aware systems need to support. It discusses the limitations of existing context-aware systems in supporting social psychology theories related to the identification of, and membership in, social groups. We argue that social norms are among the core modelling concepts that future context-aware systems need to capture, with a view to supporting and enhancing social interactions. A detailed summary of social psychology theory relevant to social computing is given, followed by a formal definition of the process by which each such norm is advertised and acquired. The social concepts identified in this paper could be used to simulate agent interactions imbued with social norms, or to apply ICT to facilitate, assist, enhance or understand social interactions. They could also be used in modelling virtual communities, where the social awareness of a community, as well as the process of joining and exiting a community, are important.
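
To make the norm advertisement and acquisition process concrete, the sketch below models it in Python. All names and structures here are hypothetical illustrations, not the paper's formal definition: a group advertises norms, and an agent acquires them on joining a community and sheds them on exiting.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SocialNorm:
    """An attitudinal/behavioural uniformity advertised within a social group."""
    name: str
    behaviour: str   # what the norm prescribes
    group: str       # the social group in which the norm holds

@dataclass
class Agent:
    """An agent whose social awareness tracks group memberships and acquired norms."""
    name: str
    memberships: set[str] = field(default_factory=set)
    acquired: set[SocialNorm] = field(default_factory=set)

    def join(self, group: str, advertised: list[SocialNorm]) -> None:
        # Joining a community triggers acquisition of its advertised norms.
        self.memberships.add(group)
        self.acquired.update(n for n in advertised if n.group == group)

    def leave(self, group: str) -> None:
        # Exiting a community drops its norms from the agent's active set.
        self.memberships.discard(group)
        self.acquired = {n for n in self.acquired if n.group != group}

# Example: an agent joins a virtual community and acquires its advertised norm.
quiet = SocialNorm("quiet-carriage", "speak softly", "commuters")
alice = Agent("alice")
alice.join("commuters", [quiet])
assert quiet in alice.acquired
alice.leave("commuters")
assert not alice.acquired
```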

Relevance: 20.00%

Abstract:

The elliptic curve pairing is undoubtedly the most powerful known primitive in public-key cryptography. Upon its introduction just over ten years ago, the computation of pairings was far too slow for them to be considered a practical option. This prompted a vast amount of research by mathematicians and computer scientists around the globe aiming to improve the computation speed. From the use of modern results in algebraic and arithmetic geometry to the application of foundational number theory dating back to the days of Gauss and Euler, cryptographic pairings have since experienced a great deal of improvement. As a result, what was an extremely expensive computation taking several minutes is now a high-speed operation taking less than a millisecond. This thesis presents a range of optimisations to the state of the art in cryptographic pairing computation. Both by extending prior techniques and by introducing several novel ideas of our own, our work has contributed to record-breaking pairing implementations.
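
The property that makes pairings so powerful is bilinearity: e(aP, bQ) = e(P, Q)^(ab). The snippet below checks this using the py_ecc library's BN254 ("bn128") pairing; this is a slow reference implementation chosen purely for illustration, unrelated to the optimised implementations the thesis describes.

```python
# Bilinearity check: e(aP, bQ) == e(P, Q)^(a*b).
from py_ecc.bn128 import G1, G2, multiply, pairing

a, b = 6, 7
lhs = pairing(multiply(G2, b), multiply(G1, a))  # e(aP, bQ); py_ecc takes the G2 point first
rhs = pairing(G2, G1) ** (a * b)                 # e(P, Q)^(ab) in the target group FQ12
assert lhs == rhs
```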

Relevance: 20.00%

Abstract:

The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties, and such domains are best described using unstructured meshes. The finite volume method is a popular method for solving the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably-saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that combine computational hardware such as CPUs and GPUs are attractive for scientific computing because of the potential of GPUs to accelerate data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations alone. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme run on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, whereas much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
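
The key property exploited by the GPU implementation is that an inexact Newton-Krylov solver never needs the Jacobian matrix itself, only its action on a vector, which a finite difference of two residual evaluations approximates. Below is a minimal CPU-side sketch of this idea using NumPy/SciPy, with a toy residual standing in for the CV-FE residual.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # Stand-in for the nonlinear CV-FE residual F(u); a toy nonlinear system here.
    return u**3 + u - 1.0

def jacobian_action(u, eps=1e-7):
    # Matrix-free Jacobian: approximate J(u)·v with one extra residual evaluation,
    # (F(u + eps*v) - F(u)) / eps, so no Jacobian is ever assembled.
    Fu = residual(u)
    return LinearOperator((u.size, u.size),
                          matvec=lambda v: (residual(u + eps * v) - Fu) / eps)

# One inexact Newton step: solve J(u) du = -F(u) approximately with GMRES.
u = np.full(10, 0.5)
du, info = gmres(jacobian_action(u), -residual(u))
u += du
```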

Relevance: 20.00%

Abstract:

The move towards technological ubiquity is allowing a more idiosyncratic and dynamic working environment to emerge, one that may result in the restructuring of information and communication technologies and in changes in their use by different user groups. Taking a ‘practice’ lens to human agency, we explore the evolving roles of, and relationships between, these user groups and their appropriation of emergent technologies, drawing upon Lamb and Kling's social actor framework. To illustrate our argument, we draw upon a study of a UK Fire Brigade that has introduced a variety of technologies in an attempt to embrace mobile and ubiquitous computing. Our analysis of the enactment of these technologies reveals that Bystanders, a group yet to be taken as the central unit of analysis in information systems research, or considered in practice, are emerging as important actors. The research implication of our work is the need to consider Bystanders further in deployments beyond those that are mobile and ubiquitous. For practice, we suggest that Bystanders require consideration in the systems development life cycle, particularly in terms of design and of education in processes of use.

Relevance: 20.00%

Abstract:

The main theme of this thesis is to allow the users of cloud services to outsource their data without the need to trust the cloud provider. The method is based on combining existing proof-of-storage schemes with distance-bounding protocols. Specifically, cloud customers will be able to verify the confidentiality, integrity, availability, fairness (or mutual non-repudiation), data freshness, geographic assurance and replication of their stored data directly, without having to rely on the word of the cloud provider.
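
One simple (hypothetical) instantiation of the combination this thesis proposes: before outsourcing, the client precomputes one-time challenge/response pairs over random blocks; later, each verification checks both that the reply is correct (proof of storage) and that it arrived within a round-trip-time bound (distance bounding, which caps how far away the data can physically be). Real schemes are considerably more sophisticated.

```python
import hashlib, os, random, time

def respond(nonce, block):
    return hashlib.sha256(nonce + block).digest()

# Setup: precompute a stock of one-time challenges, then upload the blocks.
# In a real deployment the client deletes `blocks` and keeps only `expected`.
blocks = [os.urandom(4096) for _ in range(8)]
challenges = [(i, os.urandom(16)) for i in (random.randrange(8) for _ in range(20))]
expected = {(i, n): respond(n, blocks[i]) for i, n in challenges}

def server_respond(index, nonce):
    # The cloud can only answer correctly if it still stores the real block.
    return respond(nonce, blocks[index])

def verify(max_rtt_s=0.02):
    i, n = challenges.pop()                    # consume one one-time challenge
    t0 = time.perf_counter()
    answer = server_respond(i, n)              # a network round trip in practice
    rtt = time.perf_counter() - t0
    # Storage check AND distance bound: the reply must be correct and must
    # arrive fast enough to rule out storage beyond the chosen radius.
    return answer == expected[(i, n)] and rtt <= max_rtt_s

assert verify()
```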

Relevance: 20.00%

Abstract:

Purpose: One of the challenges associated with cell-based therapies for repairing the retina is the development of suitable materials on which to grow and transplant retinal cells. Using the ARPE-19 cell line, we have previously demonstrated the feasibility of growing RPE-derived cells on membranes prepared from the silk protein fibroin. The present study aimed to develop a porous, ultra-thin fibroin membrane that might better support the development of apical-basal polarity in culture, and to extend this work to primary cultures of human RPE cells. Methods: Ultra-thin fibroin membranes were prepared using a highly polished casting table coated with Topas® (a cyclic olefin copolymer) and a 1:0.03 aqueous solution of fibroin and PEO (Mv 900 000 g/mol). After drying, the membranes were water-annealed to make them water-stable, washed in water to remove the PEO, sterilised by treatment with 95% ethanol, and washed extensively in saline. Primary cultures containing human RPE cells were established from donor posterior eye cups and maintained in DMEM/F12 medium supplemented with 10% fetal bovine serum and antibiotics. First-passage cultures were seeded onto fibroin membranes pre-coated with vitronectin and grown for 6 weeks in medium supplemented with 1% serum. Comparative cultures were established on porous 1.0 µm pore PET membranes (Millipore) and using ARPE-19 cells. Results: The fibroin membranes displayed an average thickness of 3 µm and contained numerous dimples/pore-like structures of 3-5 µm in diameter. The primary cultures predominantly contained pigmented epithelial cells, but mesenchymal cells (presumed fibroblasts) were also often present. Passaged cultures appeared to attach equally well to either fibroin or PET membranes. Over time, cells on either material adopted a more cobblestone-like morphology. Conclusions: Progress has been made towards developing a porous ultra-thin fibroin membrane that supports the cultivation of RPE cells. Further studies are required to determine the degree of membrane permeability and RPE polarity.

Relevance: 20.00%

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict strong future markets for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres such as the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, owing to the changing environment of a Cloud, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method and the second a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs consistently produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task; additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud and has developed several types of evolutionary algorithms to address them, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
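
As a flavour of the evolutionary approach, the sketch below shows a penalty-based genetic algorithm for a toy component-to-VM clustering instance (all data hypothetical, and far simpler than the grouping GAs in the thesis): fitness counts the VMs used and penalises capacity violations, so the search is steered towards feasible, low-resource placements.

```python
import random

cpu_demand = [2, 1, 3, 2, 1]   # per-component CPU requirement (toy data)
vm_capacity = 4                # identical VMs with 4 CPU units each
n_vms = 3

def fitness(placement):
    # placement[c] = VM hosting component c; lower fitness is better.
    used = [0] * n_vms
    for comp, vm in enumerate(placement):
        used[vm] += cpu_demand[comp]
    vms_on = sum(1 for u in used if u > 0)                  # resource cost
    overload = sum(max(0, u - vm_capacity) for u in used)   # constraint violation
    return vms_on + 10 * overload                           # penalty-based handling

def evolve(pop_size=30, generations=200, mut_rate=0.2):
    pop = [[random.randrange(n_vms) for _ in cpu_demand] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                    # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(cpu_demand))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:                  # mutation: move one component
                child[random.randrange(len(child))] = random.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```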

Relevance: 20.00%

Abstract:

The purpose of this paper is to provide an evolutionary perspective on cloud computing (CC) by integrating two previously disparate literatures: CC and information technology outsourcing (ITO). We review the literature and develop a framework that highlights the demand for CC services and the benefits, risks, and risk-mitigation strategies that are likely to influence the success of the service. CC success, in organisations and as a technology overall, is a function of (i) the outsourcing decision and supplier selection, (ii) contractual and relational governance, and (iii) industry standards and the legal framework. Whereas CC clients have little control over standards and/or the legal framework, they are able to influence the other factors to maximise the benefits while limiting the risks. This paper provides guidelines for (potential) cloud computing users with respect to the outsourcing decision, vendor selection, service-level agreements, and other issues that need to be addressed when opting for CC services. We contribute to the literature by providing an evolutionary and holistic view of CC that draws on the extensive literature and theory of ITO. We conclude the paper with a number of research paths that future researchers can follow to advance knowledge in this field.

Relevance: 20.00%

Abstract:

This special issue of the Journal of Urban Technology brings together five articles that are based on presentations given at the Street Computing Workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction Conference (OZCHI 2009). Our own article introduces the Street Computing vision and explores the potential, challenges, and foundations of this research trajectory. In order to do so, we first look at the currently available sources of information and discuss their link to existing research efforts. Section 2 then introduces the notion of Street Computing and our research approach in more detail. Section 3 looks beyond the core concept itself and summarizes related work in this field of interest. We conclude by introducing the papers that have been contributed to this special issue.

Relevance: 20.00%

Abstract:

As proteins within cells are spatially organized according to their role, knowledge about protein localization gives insight into protein function. Here, we describe the LOPIT technique (localization of organelle proteins by isotope tagging) developed for the simultaneous and confident determination of the steady-state distribution of hundreds of integral membrane proteins within organelles. The technique uses a partial membrane fractionation strategy in conjunction with quantitative proteomics. Localization of proteins is achieved by measuring their distribution pattern across the density gradient using amine-reactive isotope tagging and comparing these patterns with those of known organelle residents. LOPIT relies on the assumption that proteins belonging to the same organelle will co-fractionate. Multivariate statistical tools are then used to group proteins according to the similarities in their distributions, and hence localization without complete centrifugal separation is achieved. The protocol requires approximately 3 weeks to complete and can be applied in a high-throughput manner to material from many varied sources.
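
The grouping step lends itself to a compact illustration. In the sketch below (synthetic numbers, with scikit-learn's KMeans standing in for the multivariate statistics used in practice), each protein is a vector of relative abundances across gradient fractions, and an unannotated protein is assigned to the organelle whose known residents it co-fractionates with.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: proteins; columns: relative abundance across four gradient fractions.
profiles = np.array([
    [0.70, 0.20, 0.05, 0.05],   # known ER resident
    [0.65, 0.25, 0.05, 0.05],   # unannotated protein A
    [0.05, 0.10, 0.25, 0.60],   # known mitochondrial resident
    [0.10, 0.05, 0.30, 0.55],   # unannotated protein B
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

# Proteins sharing a cluster with an organelle marker inherit its localization:
# A co-fractionates with the ER marker, B with the mitochondrial marker.
assert labels[0] == labels[1] and labels[2] == labels[3]
```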

Relevance: 20.00%

Abstract:

In eukaryotes, numerous complex sub-cellular structures exist, the majority of which are delineated by membranes. Many proteins are trafficked to these structures in order to carry out their correct physiological function. Assigning the sub-cellular location of a protein is of paramount importance to biologists in elucidating its role and in refining knowledge of cellular processes by tracing certain activities to specific organelles. Membrane proteins are a key set of proteins, as they form part of the boundary of the organelles and perform many important functions, acting as transporters, receptors, and components of trafficking machinery. They are, however, among the most challenging proteins to work with owing to poor solubility, a wide concentration range within the cell, and inaccessibility to many of the tools employed in proteomics studies. This review focuses on membrane proteins, with particular emphasis on sub-cellular localization in terms of the methodologies that can be used to determine the accurate location of membrane proteins to organelles. We also discuss what is known about the membrane protein cohorts of major organelles.

Relevance: 20.00%

Abstract:

The topic of “the cloud” has attracted significant attention throughout the past few years (Cherry 2009; Sterling and Stark 2009) and, as a result, academics and trade journals have created several competing definitions of “cloud computing” (e.g., Motahari-Nezhad et al. 2009). Underpinning this article is the definition put forward by the US National Institute of Standards and Technology, which describes cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Garfinkel 2011, p. 3). Despite the lack of consensus about definitions, however, there is broad agreement on the growing demand for cloud computing. Some estimates suggest that spending on cloud-related technologies and services in the next few years may climb as high as USD 42 billion per year (Buyya et al. 2009).

Relevance: 20.00%

Abstract:

Background: Extracorporeal membrane oxygenation (ECMO) is used for severe lung and/or heart failure in intensive care units (ICUs). The Prince Charles Hospital (TPCH) has one of the largest ECMO units in Australia. Its use increased rapidly during the H1N1 (“swine flu”) pandemic, and an increase in pedal complications resulted. The relationship between ECMO and pedal complications has been described, particularly in children, though no strong data exist. This paper presents a case series of foot complications in patients who received ECMO treatment. Methods: We present nine cases of severe foot complications in patients receiving ECMO treatment at TPCH in 2009–2012. Results: Case ages ranged from 16 to 58 years, and three were male. Six cases had an unremarkable medical history prior to H1N1 or H1N2 infection, one had cardiomyopathy, one had received a lung transplant, and one had multi-organ failure following sepsis. Commonly prescribed medications included vasopressors, antibiotics, and sedatives. All cases showed signs of markedly impaired peripheral perfusion while on ECMO, and seven developed increasing areas of foot necrosis. Outcomes included two bilateral below-knee amputations, two multiple digital amputations, one case of reflex sympathetic dystrophy syndrome, three pressure injuries, and three deaths. Conclusion: Necrosis of the feet appears to occur more readily in younger people requiring ECMO treatment than in others in the ICU. The authors are conducting further studies to investigate associations between particular infections, medical history, medications, or machine techniques and severe foot complications. Some of these early results will also be presented at this conference.

Relevance: 20.00%

Abstract:

Successful control of sexually transmitted diseases (STDs) through vaccination will require vaccine strategies that target protective immunity to both the female and male reproductive tracts. In the male, the immune-privileged nature of the male reproductive tract (MRT) provides a barrier to the entry of serum immunoglobulins into the male reproductive ducts, thereby preventing the induction of protective immunity using conventional injectable vaccination techniques. In this study we investigated the potential of intranasal (IN) immunization to elicit anti-chlamydial immunity in male BALB/c mice. Intranasal immunization with Chlamydia muridarum major outer membrane protein (MOMP) admixed with cholera toxin (CT) resulted in high levels of MOMP-specific IgA in prostatic fluid (PF) and MOMP-specific IgA-secreting cells in the prostate. Prostatic fluid IgA inhibited in vitro infection of McCoy cells with C. muridarum. Using RT-PCR, we also show that mRNA for the polymeric immunoglobulin receptor (pIgR), which transports IgA across mucosal epithelia, is expressed in the prostate but not in other regions of the male reproductive ducts upstream of the prostate. These data suggest that using intranasal immunization to target IgA to the prostate may protect males against STDs while maintaining the state of immune privilege within the MRT.

Relevance: 20.00%

Abstract:

Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station itself. The existing network-based processing modes, whether executed in real time or in post-processing, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In this mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models; the distinction lies in how the user receiver software deals with corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, under various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to standard float PPP solutions without station augmentation or ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
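
As a rough illustration of the distributed mode, the sketch below (field names are hypothetical, not from the paper) shows the kind of station-based solution record each reference station could compute locally and publish through a data server, and how a nearby PPP/RTK user might select the stations within range.

```python
from dataclasses import dataclass

@dataclass
class StationSolution:
    """Reference receiver-specific parameters published instead of raw data."""
    station_id: str
    epoch: float                                   # solution epoch (GPS seconds)
    receiver_clock: float                          # precise receiver clock offset (s)
    zenith_trop_delay: float                       # ZTD estimate (m)
    code_biases: dict[str, float]                  # differential code bias per signal (ns)
    ambiguities: dict[str, float]                  # per-satellite ambiguities (cycles)
    line_of_sight: dict[str, tuple[float, float]]  # per-satellite (azimuth, elevation) (deg)

def select_corrections(solutions, baseline_km, max_km=103.0):
    # A nearby PPP/RTK user pulls corrections from all or some stations in range,
    # mirroring the 22-103 km baselines tested in the paper.
    return [s for s in solutions if baseline_km[s.station_id] <= max_km]
```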