826 results for 2D barcode based authentication scheme


Relevance:

30.00%

Publisher:

Abstract:

Purpose – This paper aims to contribute towards understanding how safety knowledge can be elicited from railway experts for the purposes of supporting effective decision-making. Design/methodology/approach – A consortium of safety experts from across the British railway industry is formed. Collaborative modelling of the knowledge domain is used as an approach to the elicitation of safety knowledge from experts. From this, a series of knowledge models is derived to inform decision-making. This is achieved by using Bayesian networks as a knowledge modelling scheme, underpinning a Safety Prognosis tool that serves meaningful prognostic information and visualises it to predict safety violations. Findings – Collaborative modelling of safety-critical knowledge is a valid approach to knowledge elicitation and its sharing across the railway industry, and it overcomes some of the key limitations of existing approaches to knowledge elicitation. Such models become an effective tool for the prediction of safety cases using railway data. This is demonstrated using passenger–train interaction safety data. Practical implications – This study contributes to practice in two main directions: by documenting an effective approach to knowledge elicitation and knowledge sharing, while also helping the transport industry to understand safety. Social implications – By supporting the railway industry in its efforts to understand safety, this research has the potential to benefit railway passengers, staff and communities in general, which is a priority for the transport sector. Originality/value – This research applies a knowledge elicitation approach to understanding safety based on collaborative modelling, which is a novel approach in the context of transport.
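The abstract describes using Bayesian networks to turn elicited expert knowledge into safety predictions. As a minimal sketch of that idea, the toy network below uses two hypothetical risk factors with made-up probabilities (none of these variables or numbers come from the study) and computes the marginal violation probability and a posterior by enumeration.

```python
# Minimal discrete Bayesian network sketch for safety prognosis.
# Variables (platform_crowding, bad_weather) and all probabilities
# are illustrative assumptions, not values from the study.

P_crowding = {True: 0.3, False: 0.7}   # P(platform crowding)
P_weather  = {True: 0.2, False: 0.8}   # P(bad weather)

# Conditional probability table: P(safety violation | crowding, weather)
P_violation = {
    (True, True): 0.25, (True, False): 0.10,
    (False, True): 0.05, (False, False): 0.01,
}

def marginal_violation():
    """P(violation), marginalising over both parent variables."""
    return sum(P_crowding[c] * P_weather[w] * P_violation[(c, w)]
               for c in (True, False) for w in (True, False))

def posterior_crowding_given_violation():
    """P(crowding | violation) via Bayes' rule."""
    joint = sum(P_crowding[True] * P_weather[w] * P_violation[(True, w)]
                for w in (True, False))
    return joint / marginal_violation()
```

A prognosis tool would query such posteriors to flag which factors most plausibly explain a predicted violation.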

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider a multiuser downlink wiretap network consisting of one base station (BS) equipped with A_A antennas, N_B single-antenna legitimate users, and N_E single-antenna eavesdroppers over Nakagami-m fading channels. In particular, we introduce a joint secure transmission scheme that adopts transmit antenna selection (TAS) at the BS and explores threshold-based selection diversity (tSD) scheduling over the legitimate users to achieve good secrecy performance while maintaining low implementation complexity. More specifically, in an effort to quantify the secrecy performance of the considered system, two practical scenarios are investigated: Scenario I, where the eavesdropper's channel state information (CSI) is unavailable at the BS, and Scenario II, where the eavesdropper's CSI is available at the BS. For Scenario I, novel exact closed-form expressions for the secrecy outage probability are derived, which are valid for general networks with an arbitrary number of legitimate users, antenna configurations, number of eavesdroppers, and switched threshold. For Scenario II, we take the ergodic secrecy rate as the principal performance metric and derive novel closed-form expressions for the exact ergodic secrecy rate. Additionally, we provide simple asymptotic expressions for the secrecy outage probability and ergodic secrecy rate under two distinct cases: Case I, where the legitimate user is located close to the BS, and Case II, where both the legitimate user and the eavesdropper are located close to the BS. Our findings reveal that the secrecy diversity order is A_A m_A and the slope of the secrecy rate is one under Case I, while the secrecy diversity order and the slope of the secrecy rate collapse to zero under Case II, where a secrecy performance floor occurs. Finally, when the switched threshold is carefully selected, the considered scheduling scheme outperforms other well-known existing schemes in terms of the secrecy performance and complexity tradeoff.
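The core quantities here (TAS over Nakagami-m fading, secrecy outage) are easy to probe numerically. The Monte Carlo sketch below is not the paper's closed-form analysis or its tSD scheduler; the antenna count, fading parameter, SNRs and target rate are illustrative assumptions. It uses the fact that a Nakagami-m power gain is Gamma-distributed with shape m.

```python
# Monte Carlo sketch: secrecy outage of transmit antenna selection (TAS)
# over Nakagami-m fading. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def secrecy_outage(A_A=4, m=2, snr_B=10.0, snr_E=1.0, R_s=1.0, trials=200_000):
    # Nakagami-m power gain ~ Gamma(shape=m, scale=1/m), unit mean.
    gains_B = rng.gamma(m, 1.0 / m, size=(trials, A_A))
    gamma_B = snr_B * gains_B.max(axis=1)             # TAS: pick best antenna
    gamma_E = snr_E * rng.gamma(m, 1.0 / m, trials)   # eavesdropper's SNR
    C_s = np.maximum(np.log2(1 + gamma_B) - np.log2(1 + gamma_E), 0.0)
    return float(np.mean(C_s < R_s))                  # P(secrecy capacity < R_s)
```

Increasing `A_A` lowers the simulated outage probability, consistent with the diversity gain the abstract attributes to TAS.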

Relevance:

30.00%

Publisher:

Abstract:

In this work, we synthesize large-area thin films of a conjugated, imine-based, two-dimensional covalent organic framework at the solution/air interface. Thicknesses between ∼2 and 200 nm are achieved. The films can be transferred to any desired substrate by lifting from underneath, enabling their use as the semiconducting active layer in field-effect transistors.

Relevance:

30.00%

Publisher:

Abstract:

This thesis concerns the modelling of fluid-structure interactions and the associated numerical methods, and is accordingly divided into two parts. The first part studies fluid-structure interactions using the fictitious domain method. In this contribution, the fluid is incompressible and laminar and the structure is considered rigid, whether stationary or in motion. The tools we developed include the implementation of a reliable solution algorithm that integrates the two domains (fluid and solid) in a mixed formulation. The algorithm is based on adaptive local mesh refinement techniques that better separate the elements of the fluid medium from those of the solid, in both 2D and 3D. The second part studies the mechanical interactions between a flexible structure and an incompressible fluid. In this contribution, we propose and analyse partitioned numerical methods for the simulation of fluid-structure interaction (FSI) phenomena, adopting the arbitrary Lagrangian-Eulerian (ALE) method. The fluid is solved iteratively using a projection-type scheme, and the structure is modelled by hyperelastic models in large deformations. We developed new mesh-motion methods to handle large deformations of the structure. Finally, a strategy for increasing the complexity of the FSI problem was defined: turbulence modelling and free-surface flows were introduced and coupled to the solution of the Navier-Stokes equations. Various numerical simulations are presented to illustrate the efficiency and robustness of the algorithm, and the numerical results attest to the validity and efficiency of the numerical methods developed.
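In ALE formulations, interior mesh nodes must follow the moving fluid-structure interface. A common simple choice, sketched below in 1D, is to extend the boundary displacement harmonically into the interior (a discrete Laplace solve); the thesis develops more advanced mesh-motion methods for large deformations, so this is only the baseline idea.

```python
# Harmonic (Laplacian) mesh motion sketch for ALE in 1D: interior node
# displacements satisfy the discrete Laplace equation with the prescribed
# boundary displacements as Dirichlet data, here solved by Jacobi iteration.
import numpy as np

def harmonic_mesh_update(x, left_disp, right_disp, iters=500):
    d = np.zeros_like(x)
    d[0], d[-1] = left_disp, right_disp       # boundary (interface) motion
    for _ in range(iters):
        d[1:-1] = 0.5 * (d[:-2] + d[2:])      # Jacobi sweep of Laplace eq.
    return x + d

x = np.linspace(0.0, 1.0, 11)                 # uniform reference mesh
x_new = harmonic_mesh_update(x, 0.0, 0.2)     # right boundary moves by 0.2
```

In 1D the harmonic extension is linear, so the displacement is distributed evenly and the mesh stays valid (no element inversion) for moderate boundary motion.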

Relevance:

30.00%

Publisher:

Abstract:

Alkali tantalates and niobates, including K(Ta/Nb)O3, Li(Ta/Nb)O3 and Na(Ta/Nb)O3, are a very promising ferroic family of lead-free compounds with perovskite-like structures. Their versatile properties make them potentially interesting for current and future applications in microelectronics, photocatalysis, energy and biomedicine. Among them, potassium tantalate, KTaO3 (KTO), has been raising interest as an alternative to the well-known strontium titanate, SrTiO3 (STO). KTO is a perovskite oxide with quantum paraelectric behaviour when electrically stimulated and a highly polarizable lattice, giving the opportunity to tailor its properties via external or internal stimuli. However, problems related to the fabrication of either bulk or 2D nanostructures make KTO not yet a viable alternative to STO. Within this context, and to contribute scientifically to leveraging applications of tantalate-based compounds, the main goals of this thesis are: i) to produce and characterise thin films of alkali tantalates by chemical solution deposition on rigid Si-based substrates, at reduced temperatures compatible with Si technology; ii) to fill gaps in scientific knowledge of these relevant functional materials related to their energetics; and iii) to exploit alternative applications of alkali tantalates, such as photocatalysis. Concerning the synthesis, attention was given to understanding phase formation in potassium tantalate synthesized via distinct routes, in order to control the crystallization of the desired perovskite structure and to avoid low-temperature pyrochlore or K-deficient phases. The phase formation process in alkali tantalates is far from being deeply analysed, in contrast to the case of Pb-containing perovskites; therefore the work initially focused on the process–phase relationship to identify the driving forces that regulate the synthesis. A comparison of the phase formation paths in the conventional solid-state reaction and the sol-gel method was conducted.
The structural analyses revealed that the intermediate pyrochlore K2Ta2O6 structure is not formed at any stage of the reaction when using the conventional solid-state reaction. On the other hand, in solution-based processes such as the alkoxide-based route, crystallization of the perovskite occurs through the intermediate pyrochlore phase; at low temperatures pyrochlore is dominant, and it is transformed to perovskite at >800 °C. The kinetic analysis, carried out using the Johnson-Mehl-Avrami-Kolmogorov model and quantitative X-ray diffraction (XRD), demonstrated that in sol-gel derived powders the crystallization occurs in two stages: i) the early stage of the reaction, dominated by primary nucleation, in which the mechanism is phase-boundary controlled; and ii) a second stage in which the low value of the Avrami exponent, n ~ 0.3, does not follow any reported category, thus not permitting an easy identification of the mechanism. Then, in collaboration with the group of Prof. Alexandra Navrotsky at the University of California at Davis (USA), thermodynamic studies were conducted using high-temperature oxide melt solution calorimetry. The enthalpies of formation of three structures were calculated: pyrochlore, perovskite and the tetragonal tungsten bronze K6Ta10.8O30 (TTB). The enthalpies of formation from the corresponding oxides, ∆Hfox, for KTaO3, KTa2.2O6 and K6Ta10.8O30 are -203.63 ± 2.84 kJ/mol, -358.02 ± 3.74 kJ/mol, and -1252.34 ± 10.10 kJ/mol, respectively, whereas those from the elements, ∆Hfel, are -1408.96 ± 3.73 kJ/mol, -2790.82 ± 6.06 kJ/mol, and -13393.04 ± 31.15 kJ/mol, respectively. Possible decomposition reactions of the K-deficient KTa2.2O6 pyrochlore, either to KTaO3 perovskite and Ta2O5 (reaction 1) or to TTB K6Ta10.8O30 and Ta2O5 (reaction 2), were proposed, and the enthalpies were calculated to be 308.79 ± 4.41 kJ/mol and 895.79 ± 8.64 kJ/mol for reaction 1 and reaction 2, respectively.
The reactions are strongly endothermic, indicating that these decompositions are energetically unfavourable, since it is unlikely that any entropy term could override such a large positive enthalpy. The energetic studies prove that pyrochlore is an energetically more stable phase than perovskite at low temperature. Thus, the local order of the amorphous precipitates drives the crystallization into the most favourable structure, namely the pyrochlore one with a similar local organization; the distance between nearest neighbours in the amorphous or short-range ordered phase is very close to that in pyrochlore. Taking into account the stoichiometric deviation in the KTO system, the selection of the most appropriate fabrication/deposition technique in thin-film technology is a key issue, especially concerning complex ferroelectric oxides. Chemical solution deposition has been widely reported as a processing method to grow KTO thin films, but the classical alkoxide route only crystallizes the perovskite phase at temperatures >800 °C, while the temperature endurance of platinized Si wafers is ~700 °C. Therefore, alternative diol-based routes, with distinct potassium carboxylate precursors, were developed aiming to stabilize the precursor solution, to avoid using toxic solvents and to decrease the crystallization temperature of the perovskite phase. Studies on powders revealed that in the case of KTOac (a solution based on potassium acetate), a mixture of perovskite and pyrochlore phases is detected at temperatures as low as 450 °C, with gradual transformation into the perovskite structure as the temperature increases up to 750 °C; however, the desired monophasic KTaO3 perovskite phase is not achieved.
In the case of KTOacac (a solution with potassium acetylacetonate), a broad peak characteristic of amorphous structures is detected at temperatures <650 °C, while at higher temperatures diffraction lines from the pyrochlore and perovskite phases are visible, and monophasic perovskite KTaO3 is formed at >700 °C. Infrared analysis indicated that the differences are due to a strong deformation of the carbonate-based structures upon heating. A series of thin films of alkali tantalates were spin-coated onto Si-based substrates using the diol-based routes. Interestingly, monophasic perovskite KTaO3 films deposited using the KTOacac solution were obtained at a temperature as low as 650 °C; the films were annealed in a rapid thermal furnace in an oxygen atmosphere for 5 min with a heating rate of 30 °C/s. Other compositions of the tantalum-based system, LiTaO3 (LTO) and NaTaO3 (NTO), were also successfully derived on Si substrates at 650 °C. The ferroelectric character of LTO at room temperature was proved. Some dielectric properties of KTO could not be measured in the parallel-capacitor configuration due to either the substrate–film or the film–electrode interface; further studies have to be conducted to overcome this issue. Application-oriented studies have also been conducted, with two case studies: i) the photocatalytic activity of alkali tantalates and niobates for the decomposition of pollutants, and ii) the bioactivity of alkali tantalate ferroelectric films as functional coatings for bone regeneration. Much attention has recently been paid to developing new types of photocatalytic materials, and tantalum and niobium oxide based compositions have been demonstrated to be active photocatalysts for water splitting due to the high potential of their conduction bands. Thus, various powders of the alkali tantalate and niobate families were tested as catalysts for methylene blue degradation.
Results showed promising activities for some of the tested compounds; KNbO3 is the most active among them, reaching over 50 % degradation of the dye after 7 h under UVA exposure, and further modification of the powders could improve the performance. In the context of bone regeneration, it is important to have platforms that, with appropriate stimuli, can support the attachment and direct the growth, proliferation and differentiation of cells. In light of this, we exploited an alternative strategy for bone implants or repairs, based on charge-mediated signals for bone regeneration. This strategy consists of coating metallic 316L-type stainless steel (316L-SST) substrates with ferroelectric LiTaO3 layers functionalized via electrical charging or UV-light irradiation. It was demonstrated that the formation of surface calcium phosphates and protein adsorption are considerably enhanced on the functionalized ferroelectric coatings on 316L-SST. Our approach can be viewed as a set of guidelines for the development of electrically functionalized platforms that can stimulate tissue regeneration, promoting direct integration of the implant in the host tissue by bone ingrowth and hence ultimately contributing to reduced implant failure.
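The quoted decomposition enthalpies can be cross-checked from the measured enthalpies of formation from oxides given above. The stoichiometric balancing in the comments is our reconstruction (it is the balancing under which the quoted numbers are reproduced); the input enthalpies are the paper's values.

```python
# Cross-check of the decomposition enthalpies from the quoted dHf(ox) values.
# Reconstructed balanced reactions:
#   reaction 1: 2 KTa2.2O6 -> 2 KTaO3 + 1.2 Ta2O5
#   reaction 2: 6 KTa2.2O6 -> K6Ta10.8O30 + 1.2 Ta2O5
dHf_ox = {                 # kJ/mol, formation from oxides; Ta2O5 is a reference
    "KTaO3":       -203.63,
    "KTa2.2O6":    -358.02,
    "K6Ta10.8O30": -1252.34,
    "Ta2O5":       0.0,
}

dH_rxn1 = 2 * dHf_ox["KTaO3"] + 1.2 * dHf_ox["Ta2O5"] - 2 * dHf_ox["KTa2.2O6"]
dH_rxn2 = dHf_ox["K6Ta10.8O30"] + 1.2 * dHf_ox["Ta2O5"] - 6 * dHf_ox["KTa2.2O6"]
# Both come out strongly positive (endothermic), matching 308.79 and
# 895.79 kJ/mol to within rounding.
```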

Relevance:

30.00%

Publisher:

Abstract:

A micro gas sensor has been developed by our group for the detection of organophosphate vapors using an aqueous oxime solution. The analyte diffuses from the high-flow-rate gas stream through a porous membrane to the low-flow-rate aqueous phase. There it reacts with the oxime PBO (1-phenyl-1,2,3-butanetrione 2-oxime) to produce cyanide ions, which are then detected electrochemically from the change in solution potential. Previous work on this oxime-based electrochemistry indicated that the optimal buffer pH for the aqueous solution is approximately 10: a basic environment is needed for the oxime anion to form and the detection reaction to take place, and at this pH the potential response of the sensor to an analyte (such as acetic anhydride) is maximized. However, the sensor response slowly decreases as the aqueous oxime solution ages, by as much as 80% in the first 24 hours. The decrease in sensor response is due to cyanide produced during the oxime degradation process, as evidenced by a cyanide-selective electrode. Solid-phase micro-extraction carried out on the oxime solution found several other possible degradation products, including acetic acid, N-hydroxybenzamide, benzoic acid, benzoyl cyanide, 1-phenyl-1,3-butanedione, 2-isonitrosoacetophenone and an imine derived from the oxime. It was concluded that degradation occurred through nucleophilic attack by a hydroxide or oxime anion to produce cyanide, as well as a nitrogen-atom rearrangement similar to the Beckmann rearrangement. The stability of the oxime in organic solvents is most likely due to the lack of water, and specifically of hydroxide ions. The reaction between the oxime and an organophosphate to produce cyanide ions requires hydroxide ions, and therefore pure organic solvents are not compatible with the current micro-sensor electrochemistry.
By combining a concentrated organic oxime solution with the basic aqueous buffer just prior to its use in the detection process, oxime degradation can be avoided while preserving the original electrochemical detection scheme. Based on beaker-cell experiments with cyanide-selective electrodes, ethanol was chosen as the best organic solvent due to its stabilizing effect on the oxime, minimal interference with the aqueous electrochemistry, and compatibility with the current microsensor material (PMMA). Further studies showed that ethanol had a small effect on micro-sensor performance, reducing the rate of cyanide production and decreasing the overall response time. To avoid incomplete mixing of the aqueous and organic solutions, they were pre-mixed externally at a 10:1 ratio. To adapt the microsensor design so that mixing could take place within the device, a small serpentine-channel component was fabricated with the same dimensions and material as the original sensor, allowing seamless integration of the microsensor with the serpentine mixing channel. Mixing in the serpentine microchannel takes place via diffusion. Both the detector potential response and diffusional mixing improve with increased liquid residence time, and thus decreased liquid flow rate. Micromixer performance was studied at a 10:1 aqueous-buffer-to-organic-solution flow rate ratio, for a total flow rate of 5.5 μL/min. It was found that the sensor response using the integrated micromixer was nearly identical to the response when the solutions were premixed and fed at the same rate.
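Whether diffusion can mix the streams within the channel comes down to comparing the residence time with the transverse diffusion time. The estimate below uses the quoted 5.5 μL/min total flow and 10:1 ratio, but the channel cross-section, length and diffusivity are assumed illustrative values, not the device's actual specifications.

```python
# Back-of-the-envelope diffusive-mixing check for the serpentine channel.
# Channel dimensions and diffusivity are illustrative assumptions.
def mixing_estimate(width=200e-6, depth=50e-6, length=0.05,
                    flow_uL_min=5.5, D=1e-9):
    flow = flow_uL_min * 1e-9 / 60.0      # total flow rate, m^3/s
    velocity = flow / (width * depth)     # mean liquid velocity, m/s
    residence = length / velocity         # time spent in the channel, s
    lamella = width / 11.0                # organic stream thickness at 10:1
    t_diffuse = lamella**2 / (2 * D)      # time to diffuse across the lamella
    return residence, t_diffuse

residence, t_diffuse = mixing_estimate()
```

With these assumed dimensions the residence time (a few seconds) comfortably exceeds the sub-second diffusion time of the thin organic lamella, which is consistent with the near-identical response to externally premixed feeds.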

Relevance:

30.00%

Publisher:

Abstract:

Seafood fraud, the misrepresentation of seafood products, has been discovered all around the world in different forms, such as false labeling, species substitution, short-weighting or over-glazing, in order to hide the correct identity, origin or weight of the products. Given the value of seafood products such as canned tuna, swordfish or grouper, the commercial fraud involving these species consists mainly of the replacement of valuable species with others of little or no value. A similar situation occurs with shelled shrimp or shellfish that are reduced into pieces for commercialization. Food fraud by species substitution is an emerging risk given the increasingly global food supply chain and the potential food safety issues. Economic food fraud is committed when food is deliberately placed on the market, for financial gain, deceiving consumers (Woolfe, M. & Primrose, S. 2004). As a result of the increased demand and the globalization of the seafood supply, more fish species are encountered in the market. In this scenario, it becomes essential to unequivocally identify the species. Traditional taxonomy, based primarily on identification keys, has shown a number of limitations in the use of distinctive features in many animal taxa, amplified when fish, crustaceans or shellfish are commercially transformed. Many fish species show a similar texture, thus the certification of fish products is particularly important when fishes have undergone procedures which affect the overall anatomical structure, such as heading, slicing or filleting (Marko et al., 2004). The absence of morphological traits, the main characteristic usually used to identify animal species, represents a challenge, and molecular identification methods are required. Among them, DNA-based methods are the most frequently employed for food authentication (Lockley & Bardsley, 2000).
In addition to food authentication and traceability, studies of taxonomy, population and conservation genetics, as well as analyses of dietary habits and prey selection, also rely on genetic analyses including DNA barcoding technology (Arroyave & Stiassny, 2014; Galimberti et al., 2013; Mafra, Ferreira, & Oliveira, 2008; Nicolé et al., 2012; Rasmussen & Morrissey, 2008), consisting of PCR amplification and sequencing of a specific region of the mitochondrial COI gene. The system proposed by P. Hebert et al. (2003) locates within the mitochondrial COI gene (cytochrome oxidase subunit I) the bio-identification system useful in the taxonomic identification of species (Lo Brutto et al., 2007). The COI region used for genetic identification - the DNA barcode - is short enough to allow, with current technology, its sequence (the pairs of nucleotide bases) to be decoded in a single step. Although this region represents only a tiny fraction of the mitochondrial DNA content in each cell, it has sufficient variability to distinguish the majority of species from one another (Biondo et al. 2016). This technique has already been employed to address the demand for assessing the actual identity and/or provenance of marketed products, as well as to unmask mislabelling and fraudulent substitutions, which are difficult to detect especially in manufactured seafood (Barbuto et al., 2010; Galimberti et al., 2013; Filonzi, Chiesa, Vaghi, & Nonnis Marzano, 2010). Nowadays, research concerns the use of genetic markers to identify not only the species and/or varieties of fish, but also molecular characters able to trace the origin and to provide an effective control tool for producers and consumers along the supply chain, in agreement with local regulations.
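Operationally, DNA barcoding reduces to comparing a query COI sequence against a reference library and reporting the closest match by percent identity. The sketch below illustrates only that matching step; the two "reference" fragments are short made-up strings (not real COI sequences), and a real pipeline would use curated references and proper alignment.

```python
# Sketch of barcode matching: best reference by percent identity.
# The reference "sequences" are made-up fragments, purely illustrative.

REFERENCES = {
    "Thunnus albacares": "ACCTGCACTAGCCCTAAGCCTACTAATTCGAGCAGAA",
    "Xiphias gladius":   "ACCTGCATTAGCCTTAAGTCTGCTTATCCGAGCGGAA",
}

def percent_identity(a, b):
    """Positional identity over the shared prefix (no alignment)."""
    n = min(len(a), len(b))
    return 100.0 * sum(x == y for x, y in zip(a[:n], b[:n])) / n

def identify(query, refs=REFERENCES):
    best = max(refs, key=lambda sp: percent_identity(query, refs[sp]))
    return best, percent_identity(query, refs[best])
```

In practice a species call is only accepted when the best-match identity clears a threshold (commonly ~97-99% for COI), which is how substitutions of look-alike processed fish are flagged.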

Relevance:

30.00%

Publisher:

Abstract:

Target space duality is one of the most profound properties of string theory. However, it customarily requires that the background fields satisfy certain invariance conditions in order to be performed consistently; for instance, the vector fields along the directions in which T-duality is performed have to generate isometries. In the present paper we examine in detail the possibility of performing T-duality along non-isometric directions. In particular, based on a recent work of Kotov and Strobl, we study gauged 2D sigma models where gauge invariance for an extended set of gauge transformations imposes weaker constraints than in the standard case; notably, the corresponding vector fields are not Killing. This formulation enables us to follow a procedure analogous to the derivation of the Buscher rules and obtain two dual models, by integrating out once the Lagrange multipliers and once the gauge fields. We show that this construction indeed works in non-trivial cases by examining an explicit class of examples based on step-2 nilmanifolds.
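For reference, the standard Buscher rules that this procedure generalises read as follows for a background $(g, B)$ with an isometric direction $\theta$ and remaining coordinates $x^i$:

```latex
\tilde g_{\theta\theta} = \frac{1}{g_{\theta\theta}}, \qquad
\tilde g_{\theta i} = \frac{B_{\theta i}}{g_{\theta\theta}}, \qquad
\tilde B_{\theta i} = \frac{g_{\theta i}}{g_{\theta\theta}},
```

```latex
\tilde g_{ij} = g_{ij} - \frac{g_{\theta i}\,g_{\theta j} - B_{\theta i}\,B_{\theta j}}{g_{\theta\theta}}, \qquad
\tilde B_{ij} = B_{ij} - \frac{g_{\theta i}\,B_{\theta j} - B_{\theta i}\,g_{\theta j}}{g_{\theta\theta}}.
```

These follow from gauging the isometry, adding a Lagrange multiplier enforcing flatness of the gauge field, and integrating out either field; the paper's point is to relax the Killing condition on the gauged vector fields while retaining an analogous derivation.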

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a numerical method based on PCA-ANFIS (Adaptive Neuro-Fuzzy Inference System) to address the problem of the uncertain cycle of water injection in oilfields. After the dimensionality of the original data is reduced by PCA, ANFIS is applied to train and test the new data. The correctness of the PCA-ANFIS models is verified with injection statistics collected from 116 wells in an oilfield; the average absolute testing error is 1.80 months. Compared with non-PCA-based models, whose average error of 4.33 months is far larger, this shows that testing accuracy is greatly enhanced by our approach. Based on these tests, the PCA-ANFIS method is robust in predicting the effective cycle of water injection, which helps oilfield developers to design water injection schemes.
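The PCA stage of such a pipeline is standard and easy to sketch: centre the well indicators, take an SVD, and keep the top components as inputs to the fuzzy-inference model. The data below are random stand-ins (the 116-well statistics are not public), and the feature count is an assumption.

```python
# PCA dimensionality-reduction sketch for the pre-ANFIS stage.
# Random data stand in for the (non-public) 116-well injection statistics.
import numpy as np

rng = np.random.default_rng(1)

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S                      # component scores, singular values

X = rng.normal(size=(116, 8))                    # 116 wells, 8 raw indicators
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]          # correlated pair, as in field data
scores, S = pca_reduce(X, k=3)                   # 3 components feed the ANFIS
```

Reducing correlated indicators this way shrinks the rule base an ANFIS must learn, which is the stated reason the PCA-ANFIS variant outperforms the non-PCA models.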

Relevance:

30.00%

Publisher:

Abstract:

One of the most significant research topics in computer vision is object detection. Most reported object detection results localise the detected object within a bounding box, but do not explicitly label the edge contours of the object. Since object contours provide a fundamental diagnostic of object shape, some researchers have initiated work on linear contour feature representations for object detection and localisation. However, linear contour feature-based localisation is highly dependent on the performance of linear contour detection within natural images, and this can be perturbed significantly by a cluttered background. In addition, the conventional approach to achieving rotation-invariant features is to rotate the feature receptive field to align with the local dominant orientation before computing the feature representation. Grid resampling after rotation adds extra computational cost and increases the total time for computing the feature descriptor. Although this is not an expensive process on current computers, it is still desirable for each step of the implementation to be fast, especially when the number of local features grows and the application must run in real time on resource-limited "smart devices" such as mobile phones. Motivated by the above issues, a 2D object localisation system is proposed in this thesis that matches features of edge contour points, an alternative method that takes advantage of shape information for object localisation. This is inspired by the fact that edge contour points comprise the basic components of shape contours; in addition, edge point detection is usually simpler to achieve than linear edge contour detection. Therefore, the proposed localisation system avoids the need for linear contour detection and reduces the pathological disruption from the image background. Moreover, since natural images usually comprise many more edge contour points than interest points (i.e.
corner points), we also propose new methods to generate rotation-invariant local feature descriptors without pre-rotating the feature receptive field, to improve the computational efficiency of the whole system. In detail, the 2D object localisation system matches edge contour point features in a constrained search area based on the initial pose estimate produced by a prior object detection process. The local feature descriptor obtains rotation invariance by making use of the rotational symmetry of the hexagonal structure; accordingly, a set of local feature descriptors is proposed based on the hierarchically grouped hexagonal structure. Ultimately, the 2D object localisation system achieves very promising performance based on matching the proposed features of edge contour points, with a mean correct labelling rate of 0.8654 and a mean false labelling rate of 0.0314 on data from the Amsterdam Library of Object Images (ALOI). Furthermore, the proposed descriptors are evaluated against state-of-the-art descriptors and achieve competitive performance in terms of pose estimation, with around half a pixel of pose error.
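The key trick, rotation invariance from rotational symmetry rather than receptive-field rotation, can be illustrated with a toy descriptor: sample six values on a hexagonal ring and normalise by taking the smallest of the six cyclic rotations, so a 60-degree rotation of the patch leaves the descriptor unchanged. This is a minimal sketch of the idea only; the thesis' hierarchical hexagonal descriptors are considerably richer.

```python
# Toy rotation-invariant descriptor on a hexagonal ring: canonicalise by
# the lexicographically smallest cyclic rotation instead of resampling a
# rotated grid. Illustrative simplification of the hexagonal-symmetry idea.

def rotation_invariant(desc6):
    """desc6: list of 6 values sampled on a hexagonal ring of neighbours."""
    rotations = [tuple(desc6[i:] + desc6[:i]) for i in range(6)]
    return min(rotations)          # canonical form, same for all 6 rotations

a = [3, 1, 4, 1, 5, 9]
b = a[2:] + a[:2]                  # the same ring rotated by 120 degrees
```

Because no grid resampling is needed, the canonicalisation costs only a handful of comparisons per feature, which is the efficiency argument made above.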

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smooth filter based on the covariance of the estimate is employed to handle uncertainty in under-constrained scenarios (e.g. corridors). Simulated and real experiments have been performed to compare our approach with two prominent scan matchers and with wheel odometry. Quantitative and qualitative results demonstrate the superior performance of our approach which, along with its very low computational cost (0.9 milliseconds on a single CPU core), makes it suitable for robotic applications that require planar odometry. For this purpose, we also provide the code so that the robotics community can benefit from it.
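The robust-minimisation step can be sketched generically: each scan point yields a linear range-flow-style constraint on the planar velocity, and iteratively reweighted least squares (IRLS) with a Cauchy-type weight damps the constraints violated by moving objects or noise. The data below are synthetic, and for brevity only a 2-parameter translational velocity is estimated; the actual method also estimates rotation and works coarse-to-fine.

```python
# IRLS with Cauchy weights on synthetic linear constraints a_i . v = b_i,
# a generic sketch of the robust velocity solve (not the paper's code).
import numpy as np

rng = np.random.default_rng(2)

def robust_velocity(A, b, c=1.0, iters=10):
    v = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary LS initialisation
    for _ in range(iters):
        r = A @ v - b                             # per-constraint residuals
        w = 1.0 / (1.0 + (r / c) ** 2)            # Cauchy robust weights
        Aw = A * w[:, None]
        v = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted normal equations
    return v

v_true = np.array([0.5, -0.2])
A = rng.normal(size=(300, 2))
b = A @ v_true + 0.01 * rng.normal(size=300)
b[:30] += 5.0                                     # 10% gross outliers
v_est = robust_velocity(A, b)
```

The outlier constraints end up with weights near zero, so the estimate stays close to the true velocity despite 10% gross corruption.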

Relevance:

30.00%

Publisher:

Abstract:

Various environmental management systems, standards and tools are being created to assist companies in becoming more environmentally friendly. However, not all enterprises have adopted environmental policies on the same scale and range, and there is no existing guide to help them determine their level of environmental responsibility and, subsequently, to support them in moving towards environmental responsibility excellence. This research proposes the use of a Belief Rule-Based (BRB) approach to assess an enterprise's level of commitment to environmental issues. The Environmental Responsibility BRB assessment system was developed for this research. Participating companies complete a structured questionnaire; an automated analysis of their responses (using the Belief Rule-Based approach) determines their environmental responsibility level, followed by a recommendation on how to progress to the next level. The recommended best practices help promote understanding, increase awareness, and make the organization greener. BRB systems consist of two parts: a knowledge base and an inference engine. The knowledge base in this research was constructed after an in-depth literature review and critical analysis of existing environmental performance assessment models, guided primarily by the EU Draft Background Report on "Best Environmental Management Practice in the Telecommunications and ICT Services Sector". The reasoning algorithm of the selected Drools JBoss BRB inference engine is forward chaining, where an inference iteratively searches for a pattern match between the input and the if-then clauses. However, the forward chaining mechanism is not equipped with uncertainty handling. Therefore, a decision was made to deploy evidential reasoning and forward chaining with a hybrid knowledge-representation inference scheme to accommodate imprecision, ambiguity and fuzzy types of uncertainty.
It is believed that such a system generates well-balanced, sensible results adapted to Green ICT readiness, helping enterprises focus on making improvements towards more sustainable business operations.
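The belief-rule mechanism can be sketched in miniature: each rule carries a belief distribution over responsibility levels, rules are activated by how closely a questionnaire score matches their reference value, and activated beliefs are combined. The combination below is a normalised weighted average, a simplification of the full evidential-reasoning algorithm, and the levels, reference scores and belief distributions are illustrative inventions, not the system's actual rule base.

```python
# Simplified belief-rule inference sketch (illustrative rules and levels;
# weighted averaging stands in for the full evidential-reasoning combination).

LEVELS = ("beginner", "improver", "leader")

RULES = [  # (reference questionnaire score, belief distribution over LEVELS)
    (0.2, {"beginner": 0.9, "improver": 0.1, "leader": 0.0}),
    (0.5, {"beginner": 0.2, "improver": 0.7, "leader": 0.1}),
    (0.9, {"beginner": 0.0, "improver": 0.2, "leader": 0.8}),
]

def assess(score):
    # Activation weight: triangular match against each rule's reference score.
    acts = [max(0.0, 1.0 - abs(score - ref) / 0.4) for ref, _ in RULES]
    total = sum(acts) or 1.0
    combined = {lv: 0.0 for lv in LEVELS}
    for act, (_, belief) in zip(acts, RULES):
        for lv in LEVELS:
            combined[lv] += (act / total) * belief[lv]
    return combined                       # belief degrees summing to 1

result = assess(0.85)                     # a strong questionnaire score
```

The output is a graded belief distribution rather than a single crisp label, which is precisely the uncertainty handling that plain forward chaining lacks.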

Relevance:

30.00%

Publisher:

Abstract:

When a task must be executed in a remote or dangerous environment, teleoperation systems may be employed to extend the influence of the human operator. In the case of manipulation tasks, haptic feedback of the forces experienced by the remote (slave) system is often highly useful in improving an operator's ability to perform effectively. In many of these cases (especially teleoperation over the internet and ground-to-space teleoperation), substantial communication latency exists in the control loop and has a strong tendency to destabilize the system. The first viable solution to this problem in the literature was based on a scattering/wave transformation from transmission line theory. This wave transformation requires the designer to select a wave impedance parameter appropriate to the teleoperation system. It is widely recognized that a small value of wave impedance is well suited to free motion and a large value is preferable for contact tasks. Beyond this basic observation, however, very little guidance exists in the literature regarding the selection of an appropriate value. Moreover, prior research on impedance selection generally fails to account for the fact that in any realistic contact task there will simultaneously exist contact considerations (perpendicular to the surface of contact) and quasi-free-motion considerations (parallel to the surface of contact). The primary contribution of the present work is to introduce an approximate linearized optimum for the choice of wave impedance and to apply this quasi-optimal choice to the Cartesian reality of such a contact task, in which it cannot be expected that a given joint will be either perfectly normal or perfectly parallel to the motion constraint. The proposed scheme selects a wave impedance matrix that is appropriate to the conditions encountered by the manipulator.
This choice may be implemented as a static wave impedance value or as a time-varying choice updated according to the instantaneous conditions encountered. A Lyapunov-like analysis is presented demonstrating that time variation in wave impedance does not violate the passivity of the system. Experimental trials, both in simulation and on a haptic feedback device, are presented to validate the technique. Consideration is also given to the case of an uncertain environment, in which an a priori impedance choice may not be possible.
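The scattering/wave transformation mentioned above can be made concrete with a short sketch. The form below is the standard Niemeyer-Slotine encoding, with a per-axis (diagonal) wave impedance chosen large along the assumed contact normal and small along the tangential directions; the specific impedance values and velocity/force vectors are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def wave_encode(v, f, b):
    """Standard wave transformation: velocity v and force f are encoded into
    a forward wave u and a backward wave w, per axis, for impedance b > 0."""
    b = np.asarray(b, dtype=float)
    u = (b * v + f) / np.sqrt(2.0 * b)
    w = (b * v - f) / np.sqrt(2.0 * b)
    return u, w

def wave_decode(u, w, b):
    """Invert the transformation to recover velocity and force."""
    b = np.asarray(b, dtype=float)
    v = (u + w) / np.sqrt(2.0 * b)
    f = np.sqrt(b / 2.0) * (u - w)
    return v, f

# Illustrative diagonal impedance choice (an assumption for this sketch):
# high impedance along the contact normal, low impedance tangentially.
b = np.array([300.0, 5.0, 5.0])   # [normal, tangential, tangential]
v = np.array([0.0, 0.2, 0.1])     # velocities (m/s)
f = np.array([10.0, 0.5, 0.0])    # forces (N)

u, w = wave_encode(v, f, b)
v2, f2 = wave_decode(u, w, b)
assert np.allclose(v, v2) and np.allclose(f, f2)   # lossless round trip
```

Note that the power flow satisfies f·v = (u² - w²)/2 per axis regardless of the impedance chosen, which is why the transformation preserves passivity under communication delay.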

Relevance:

30.00%

Publisher:

Abstract:

Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past 5 years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.

The discussion begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic, and membrane proteins from single cells; it is the prototype for subsequent proteomic microchips of more sophisticated design used in preclinical cancer research and clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode); a single microchip contains between a few hundred and ten thousand microchambers. Functional proteomics assays at single-cell resolution yield unique information that significantly shapes thinking in cancer research. An in-depth discussion of the analysis and interpretation of this information, such as functional protein fluctuations and protein-protein correlative interactions, follows.
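When cells are loaded passively into microchambers like those described above, chamber occupancy is commonly modeled as Poisson-distributed. A minimal sketch, assuming ideal Poisson loading (the loading density below is an illustrative assumption, not a value from the thesis):

```python
import math

def occupancy_fractions(mean_cells_per_chamber, k_max=4):
    """Poisson model of passive cell loading: expected fraction of
    microchambers holding exactly k cells, for k = 0..k_max."""
    lam = mean_cells_per_chamber
    return {k: math.exp(-lam) * lam**k / math.factorial(k)
            for k in range(k_max + 1)}

# At an average loading of one cell per chamber (assumed here),
# roughly 36.8% of chambers hold exactly one cell and 36.8% are empty.
fracs = occupancy_fractions(1.0)
print(f"single-cell chambers: {fracs[1]:.1%}, empty: {fracs[0]:.1%}")
```

This is why a chip with thousands of chambers still yields hundreds of true single-cell measurements per run, alongside empty chambers that can serve as on-chip background controls.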

The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).

The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn grant robustness to the entire population. The concept of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can be applied, using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both cell-line and primary tumor models.
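The flavor of such a "quantitative Le Chatelier" prediction can be sketched as a linear-response calculation: the shift in mean protein levels under a small perturbation is estimated from the unperturbed protein-protein covariance matrix. This particular linear-response form, the simulated data, and the perturbation vector are all assumptions for illustration, not the thesis's actual derivation or measurements.

```python
import numpy as np

# Simulated single-cell protein levels: 500 cells x 3 proteins
# (log-normal marginals are a common assumption for protein copy numbers).
rng = np.random.default_rng(0)
levels = rng.lognormal(mean=2.0, sigma=0.4, size=(500, 3))

cov = np.cov(levels, rowvar=False)   # measured protein-protein covariance

# Hypothetical small perturbation applied to protein 0 (an assumption):
d_mu = np.array([0.05, 0.0, 0.0])

# Linear-response prediction: the covariance matrix maps the perturbation
# onto the expected shift in each protein's mean level.
d_mean_predicted = cov @ d_mu
print(d_mean_predicted)
```

The point of the exercise is that second moments (fluctuations) measured in the unperturbed population predict first-moment responses, which is the statistical-mechanics analogue of Le Chatelier's principle.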

The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipating therapy resistance and identifying effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by a targeted inhibitor. Most signaling cascades are constituted by strongly coupled protein-protein interactions. A physical analogy for such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as the atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second associated with ERK/Src signaling, have been resolved, which in turn allow us to anticipate resistance and to design effective combination therapies, as well as to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
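The mode decomposition described above amounts to an eigendecomposition of the protein-protein covariance matrix, directly analogous to finding normal modes of a coupled lattice. A minimal sketch on simulated data, in which two of five hypothetical phosphoproteins are deliberately coupled so that they emerge together in the dominant mode (the panel and data are assumptions, not the thesis's measurements):

```python
import numpy as np

# Simulated phospho-signal levels: 200 cells x 5 proteins.
rng = np.random.default_rng(1)
n_cells, n_proteins = 200, 5
data = rng.normal(size=(n_cells, n_proteins))
data[:, 1] += 0.8 * data[:, 0]        # couple proteins 0 and 1 into one "mode"

cov = np.cov(data, rowvar=False)

# eigh returns eigenvalues in ascending order; each eigenvector is a
# linear combination of proteins, i.e. one independent signaling mode.
eigvals, eigvecs = np.linalg.eigh(cov)
dominant = eigvecs[:, -1]             # mode with the largest variance

# Its largest-magnitude weights identify which proteins move together:
print(np.argsort(np.abs(dominant))[-2:])
```

Here the dominant mode loads on the two coupled proteins, mirroring how the mTOR-associated and ERK/Src-associated modes were resolved from the measured correlation structure.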

In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale for extending the current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation are presented, and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.