887 results for Redundant Manipulator
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel.

This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system. We explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels.

We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone. These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
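As a minimal numerical sketch of the quantities involved (assuming the standard definitions of min-entropy leakage; the channel matrices and prior here are illustrative, not taken from the thesis), the following Python fragment computes the leakage of two channels and checks the cascade bound described above:

```python
import numpy as np

def min_entropy_leakage(prior, C):
    """Min-entropy leakage of channel C (rows: inputs, columns: outputs)."""
    prior_vuln = prior.max()                            # V(pi): best blind guess
    post_vuln = (prior[:, None] * C).max(axis=0).sum()  # V(pi, C): guess after observing output
    return np.log2(post_vuln / prior_vuln)

C1 = np.array([[0.7, 0.3],      # illustrative channels
               [0.2, 0.8]])
C2 = np.array([[0.9, 0.1],
               [0.4, 0.6]])
cascade = C1 @ C2               # output of C1 fed as input to C2
pi = np.array([0.5, 0.5])       # uniform prior

# The cascade never leaks more than its first component channel.
assert min_entropy_leakage(pi, cascade) <= min_entropy_leakage(pi, C1)
```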
Abstract:
Chapter 1: Patents and Entry Competition in the Pharmaceutical Industry: The Role of Marketing Exclusivity. Effective patent length for innovative drugs is severely curtailed by the extensive efficacy and safety tests required for FDA approval, raising concerns over the adequacy of incentives for new drug development. The Hatch-Waxman Act extends patent length for new drugs by five years, but also promotes generic entry by simplifying approval procedures and granting 180-day marketing exclusivity to the first generic entrant before the patent expires. In this paper we present a dynamic model to examine the effect of marketing exclusivity. We find that marketing exclusivity may be redundant and that its removal may increase generic firms' profits and social welfare.

Chapter 2: Why Authorized Generics?: Theoretical and Empirical Investigations. Facing generic competition, brand-name companies sometimes launch generic versions themselves, called authorized generics (AGs). This practice is puzzling. If it is cannibalization, it cannot be profitable. If it is divisionalization, it should be practiced always rather than sometimes. I explain this phenomenon in terms of switching costs, in a model in which the incumbent first develops a customer base to ready itself against later generic competition. I show that only sufficiently low switching costs or a sufficiently large market justifies the launch of AGs. I then use prescription drug data to test those results and find support.

Chapter 3: The Merger Paradox and R&D. Oligopoly theory says that merger is unprofitable unless a majority of the firms in an industry merge. Here, we introduce R&D opportunities to resolve this so-called merger paradox. We have three results. First, when there is one R&D firm, that firm can profitably merge with any number of non-R&D firms. Second, with multiple R&D firms and multiple non-R&D firms, all R&D firms can profitably merge. Third, with two R&D firms and two non-R&D firms, each R&D firm prefers to merge with a non-R&D firm. With three or more non-R&D firms, however, the R&D firms prefer to merge with each other.
Abstract:
With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS), Probe, User to Root (U2R), and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks show up with great frequency within a short period of time when they attack a system. They differ from normal traffic data and can easily be separated from normal activities. On the contrary, U2R and R2L attacks are embedded in the data portions of packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. Therefore, we focus on the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to improve detection speed. Features with poor ability to predict the signatures of attacks, and features inter-correlated with one or more other features, are considered redundant; such features are removed so that only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method comprises multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to solve the ambiguity problem. The latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces detection time but also effectively detects U2R and R2L attacks that contain ambiguous information.
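As a hedged sketch of the first-phase idea only (the greedy thresholds and correlation measure below are assumptions for illustration, not the dissertation's exact algorithm), a correlation-based redundancy filter can look like this in Python:

```python
import numpy as np

def select_features(X, y, min_relevance=0.1, max_redundancy=0.9):
    """Keep features that predict the label well and are not strongly
    inter-correlated with an already-kept feature; the rest are redundant."""
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(X.shape[1])])
    kept = []
    for j in np.argsort(-relevance):                  # most relevant first
        if relevance[j] < min_relevance:
            break                                     # poor predictors: redundant
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_redundancy
               for k in kept):
            kept.append(j)                            # adds non-redundant information
    return sorted(kept)
```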
Abstract:
A man-machine system called a teleoperator system has been developed to work in hazardous environments such as nuclear reactor plants. Force reflection is a type of force feedback in which forces experienced by the remote manipulator are fed back to the manual controller. In a force-reflecting teleoperation system, the operator uses the manual controller to direct the remote manipulator and receives visual information from a video image and/or graphical animation on the computer screen. This thesis presents the design of a portable Force-Reflecting Manual Controller (FRMC) for the teleoperation of tasks such as hazardous material handling, waste cleanup, and space-related operations. The work consists of the design and construction of a prototype 1-Degree-of-Freedom (DOF) FRMC, the development of the Graphical User Interface (GUI), and system integration. Two control strategies, PID and fuzzy logic controllers, are developed and experimentally tested, and the system response of each is analyzed and evaluated. In addition, the concept of a telesensation system is introduced, and a variety of design alternatives for a 3-DOF FRMC are proposed for future development.
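For a feel of the first control strategy, here is a minimal discrete PID loop in Python (gains, sample time, and signals are illustrative assumptions; the thesis tunes and evaluates its own controller on the physical 1-DOF FRMC):

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                    # accumulate I term
        derivative = (error - self.prev_error) / self.dt    # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. a 1 kHz force-reflection loop commanding torque toward a force setpoint:
controller = PID(kp=5.0, ki=0.8, kd=0.05, dt=0.001)
torque = controller.update(setpoint=0.2, measured=0.18)
```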
Abstract:
The great amount of data generated by automation and process supervision in industry raises two problems: a large demand for disk storage and the difficulty of streaming this data over a telecommunications link. Lossy data compression algorithms emerged in the 1990s to solve these problems, and industries consequently began using them in supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, which becomes impracticable in systems that monitor thousands of variables. In that context, this paper proposes the Adaptive Swinging Door Trending algorithm, an adaptation of Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing signal trends in real time. A comparative performance analysis of lossy compression algorithms applied to process-variable time series and dynamometer cards is also presented; the algorithms used for comparison are piecewise linear approximations and transform-based methods.
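For reference, a textbook (non-adaptive) Swinging Door Trending pass over a series of timestamped samples can be sketched as follows in Python; the proposed Adaptive SDT additionally retunes the deviation parameter at run time from the signal's trend, which is not shown here:

```python
def swinging_door(points, deviation):
    """Compress a list of (t, y) samples; returns the archived subset."""
    archived = [points[0]]
    slope_up, slope_low = float("-inf"), float("inf")
    for prev, (t, y) in zip(points, points[1:]):
        t0, y0 = archived[-1]
        # Doors pivot at (t0, y0 +/- deviation) and swing toward each other.
        slope_up = max(slope_up, (y - (y0 + deviation)) / (t - t0))
        slope_low = min(slope_low, (y - (y0 - deviation)) / (t - t0))
        if slope_up > slope_low:        # doors crossed: corridor no longer holds
            archived.append(prev)       # archive the last sample that still fit
            t0, y0 = prev
            slope_up = (y - (y0 + deviation)) / (t - t0)
            slope_low = (y - (y0 - deviation)) / (t - t0)
    archived.append(points[-1])
    return archived
```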
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have received much attention in the context of longitudinal regression. These methods are based on estimates of the first two moments of the data and the working correlation structure. Confidence regions and hypothesis tests are based on asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure. Because of such misspecifications, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm based on EL principles for the estimation of the regression parameters and the construction of a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates. In this situation it is necessary to identify a submodel that adequately represents the data. Including redundant variables may impact the model's accuracy and efficiency for inference. We propose a penalized empirical likelihood (PEL) variable selection based on GEEs; the variable selection and the estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties, and present an algorithm for optimizing PEL. Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
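For concreteness, the core objects have the following standard form (a sketch of the usual estimating-equation EL setup; the thesis's precise notation may differ):

```latex
% Empirical likelihood ratio for regression parameters \beta defined through
% GEE-type estimating functions g_i(\beta), one per subject:
R(\beta) = \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\;
           \sum_{i=1}^{n} p_i = 1,\; \sum_{i=1}^{n} p_i\, g_i(\beta) = 0 \Big\}.
% Penalized EL for variable selection adds a sparsity penalty on the
% coefficients, so selection and estimation happen simultaneously:
\ell_P(\beta) = -\log R(\beta) + n \sum_{j=1}^{p} p_\lambda(|\beta_j|).
```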
Abstract:
In this Bachelor Thesis I provide readers with tools and scripts for the control of a 7-DOF manipulator, backed by some theory from Robotics and Computer Science in order to better contextualize the work. In practice, we survey the most common software and development environments used for this task: ROS, visual simulation with VREP and RVIZ, and an almost "stand-alone" ROS extension called MoveIt!, a very complete programming interface for trajectory planning and obstacle avoidance. As the introductory chapter makes clear, the capability of detecting collision objects through a camera sensor and re-planning to the desired end-effector pose is not enough. In fact, this work is embedded in a more complex system in which recognition of particular objects is needed. Through a ROS package and customized scripts, a detailed procedure is provided on how to distinguish a particular object, retrieve its reference frame with respect to a known one, and then navigate to that target. Together with the technical details, the aim is also to report working scripts and a specific appendix (A) to refer to when putting things together.
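As a taste of the MoveIt! interface used for trajectory planning (a minimal sketch; the group name "manipulator" and the target pose are assumptions that must match the robot's MoveIt! configuration):

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("plan_to_target")
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name

target = Pose()                      # desired end-effector pose
target.position.x, target.position.y, target.position.z = 0.4, 0.1, 0.5
target.orientation.w = 1.0

group.set_pose_target(target)
group.go(wait=True)                  # plan around known collision objects, execute
group.stop()
group.clear_pose_targets()
```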
Abstract:
A firm's financial information is essential to stakeholders' decision making, yet financial statements do not always show the firm's real image. This study examines listed firms from Portugal and the UK. Firms have different motives to manipulate earnings: some strive to influence investors' perception of the company, others to obtain better terms from credit institutions or to pay less tax to the authorities. This behaviour is usually induced when firms face financial problems. Consequently, the study also examines the impact of the financial crisis on earnings management: we ask how the extent of firms' involvement in earnings management changes when the world undergoes a financial crisis. Furthermore, we compare two countries with different legal enforcement, in terms of accounting quality, to identify the main differences. We used a panel data methodology to analyse financial data from 2004 to 2014 for listed firms from Portugal and the UK, applying the Beneish (1999) model to categorize firms as manipulators or non-manipulators. Analysing accounting information according to Beneish's ratios, the findings suggest that the financial crisis had some impact on UK firms' tendency to manipulate financial results, although it is not statistically significant. Moreover, beyond the differences between Portugal and the UK, the results contradict the common view of the legal systems' quality, as UK firms tend to apply more accounting manipulation techniques than Portuguese ones. Our main results also confirm that some UK firms manipulate receivables' days, the asset quality index, the depreciation index, leverage, and sales and general administrative expenses, whereas Portuguese firms manipulate only receivables' days. Finally, we find that the main reason to manipulate results is neither to influence the cost of obtained funds nor to minimize the tax burden, since net profit does not explain the ratios used in the Beneish model; rather, results suggest that listed firms manipulate results mainly to influence financial investors' perception.
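The classification relies on Beneish's eight-index model; for reference, its commonly reported form is shown below (coefficients as published in Beneish (1999); the cutoff is approximate):

```latex
% M-score from the eight ratios; values above about -2.22 flag a firm as a
% likely manipulator:
M = -4.84 + 0.920\,\mathrm{DSRI} + 0.528\,\mathrm{GMI} + 0.404\,\mathrm{AQI}
    + 0.892\,\mathrm{SGI} + 0.115\,\mathrm{DEPI} - 0.172\,\mathrm{SGAI}
    + 4.679\,\mathrm{TATA} - 0.327\,\mathrm{LVGI}.
```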
Abstract:
This paper deals with monolithic decoupled XYZ compliant parallel mechanisms (CPMs) for multi-function applications, which can be fabricated monolithically, without assembly, and offer kinetostatic decoupling. First, the conceptual design of monolithic decoupled XYZ CPMs is presented using identical spatial compliant multi-beam modules based on a decoupled 3-PPPR parallel kinematic mechanism. Three types of applications are described in principle: motion/positioning stages, force/acceleration sensors, and energy-harvesting devices. Kinetostatic and dynamic modelling is then conducted to capture the displacements of any stage under loads acting at any stage, as well as the natural frequency, with comparisons against FEA results. Finally, a performance characteristics analysis for motion stage applications is investigated in detail to show how changes in the geometrical parameters affect the performance characteristics, providing initial optimal estimates. Results show that thinner beams and larger cubic stages improve the performance characteristics, excluding the natural frequency, under allowable conditions. To improve the natural frequency characteristic, a stiffness-enhanced monolithic decoupled configuration can be adopted, achieved by employing more beams in the spatial modules or by reducing the mass of each cubic stage. In addition, an isotropic variation with a different motion range along each axis and the same payload in each leg is proposed. A redundant design for monolithic fabrication is also introduced, which overcomes the drawback of monolithic fabrication that a failed compliant beam is difficult to replace, and extends the CPM's life.
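A first-order explanation of the thickness trade-off (simple fixed-guided beam theory as an assumption, not the paper's full kinetostatic model):

```latex
% Bending stiffness of a fixed-guided leaf beam of width w, thickness t,
% length L scales as
k \;\propto\; \frac{E\,w\,t^{3}}{L^{3}},
% so thinner beams increase compliance and motion range, while the first
% natural frequency
f_n \;=\; \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{eff}}}{m_{\mathrm{stage}}}}
% falls with them; adding beams (larger k_eff) or lightening the cubic
% stages (smaller m_stage) recovers it.
```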
Abstract:
Natural IgM (nIgM) is constitutively present in the serum, where it aids in the early control of viral and bacterial expansions. nIgM also plays a significant role in the prevention of autoimmune disease by promoting the clearance of cellular debris. However, the cells that maintain high titers of nIgM in the circulation had not yet been identified. Several studies have linked serum nIgM with the presence of fetal-lineage B cells, and others have detected IgM secretion directly by B1a cells in various tissues. Nevertheless, a substantial contribution of undifferentiated B1 cells to nIgM titers is doubtful, as the ability to produce large quantities of antibody (Ab) is a function of the phenotype and morphology of differentiated plasma cells (PCs). No direct evidence exists to support the claim that a B1-cell population directly produces the bulk of circulating nIgM. The source of nIgM thus remained uncertain and unstudied.
In the first part of this study, I identified the primary source of nIgM. Using enzyme-linked immunosorbent spot (ELISPOT) assay, I determined that the majority of IgM Ab-secreting cells (ASCs) in naïve mice reside in the bone marrow (BM). Flow cytometric analysis of BM cells stained for intracellular IgM revealed that nIgM ASCs express IgM and the PC marker CD138 on their surface, but not the B1a cell marker CD5. By spinning these cells onto slides and staining them, following isolation by fluorescence-activated cell sorting (FACS), I found that they exhibit the typical morphological characteristics of terminally differentiated PCs. Transfer experiments demonstrated that BM nIgM PCs arise from a progenitor in the peritoneal cavity (PerC), but not isolated PerC B1a, B1b, or B2 cells. Immunoglobulin (Ig) gene sequence analysis and examination of B1-8i mice, which carry an Ig knockin that prohibits fetal B-cell development, indicated that nIgM PCs differentiate from fetal-lineage B cells. BrdU uptake experiments showed that the nIgM ASC compartment contains a substantial fraction of long-lived plasma cells (LLPCs). Finally, I demonstrated that nIgM PCs occupy a survival niche distinct from that used by IgG PCs.
In the second part of this dissertation, I characterized the unique survival niche of nIgM LLPCs, which maintain constitutive high titers of nIgM in the serum. By using genetically deficient or Ab-depleted mice, I found that neither T cells, type 2 innate lymphoid cells, nor mast cells, the three major hematopoietic producers of IL-5, were required for nIgM PC survival in the BM. However, IgM PCs associate strongly with IL-5-expressing BM stromal cells, which support their survival in vitro when stimulated. In vivo neutralization of IL-5 revealed that, like individual survival factors for IgG PCs, IL-5 is not the sole supporter of IgM PCs, but is likely one of several redundant molecules that together ensure uninterrupted signaling. Thus, the long-lived nIgM PC niche is not composed of hematopoietic sources of IL-5, but a stromal cell microenvironment that provides multiple redundant survival signals.
In the final part of my study, I identified and characterized the precursor of nIgM PCs, which I found in the first project to be resident in the PerC, but not a B1a, B1b, or B2 cell. By transferring PerC cells sorted based on expression of CD19, CD5, and CD11b, I found that only the CD19+CD5+CD11b- population contained cells capable of differentiating into nIgM PCs. Transfer of decreasing numbers of unfractionated PerC cells into Rag1 knockouts revealed an order-of-magnitude drop in the rate of serum IgM reconstitution between stochastically sampled pools of 10⁶ and 3×10⁵ PerC cells, suggesting that the CD19+CD5+CD11b- compartment comprises two cell types, and that interaction between the two is necessary for nIgM-PC differentiation. By transferring neonatal liver, I determined that the early hematopoietic environment is required for nIgM PC precursors to develop. Using mice carrying a mutation that disturbs cKit expression, I also found that cKit appears to be required at a critical point near birth for the proper development of nIgM PC precursors.
The collective results of these studies demonstrate that nIgM is the product of BM-resident PCs, which differentiate from a PerC B cell precursor distinct from B1a cells, and survive long-term in a unique survival niche created by stromal cells. My work creates a new paradigm by which to understand nIgM, B1 cell, and PC biology.
Abstract:
Bud formation by Saccharomyces cerevisiae is a fundamental process for yeast proliferation. Bud emergence is initiated by the polarization of the cytoskeleton, leading to local secretory vesicle delivery and glucan synthase activity. The master regulator of polarity establishment is a small Rho-family GTPase, Cdc42. Cdc42 forms a clustered patch at the incipient budding site in late G1 and mediates downstream events that lead to bud emergence. Cdc42 promotes morphogenesis via its various effectors. PAKs (p21-activated kinases) are important Cdc42 effectors that mediate actin cytoskeleton polarization and septin filament assembly. The PAKs Cla4 and Ste20 share common binding domains for GTP-Cdc42 and are partially redundant in function. However, we found that Cla4 and Ste20 behaved differently during polarization, and that this depended on their different membrane interaction domains. Cla4 and Ste20 also compete for a limited number of binding sites at the polarity patch during bud emergence. These results suggest that PAKs may be differentially regulated during polarity establishment.
Morphogenesis of yeast must be coordinated with the nuclear cycle to enable successful proliferation. Many environmental stresses temporarily disrupt bud formation, and in such circumstances, the morphogenesis checkpoint halts nuclear division until bud formation can resume. Bud emergence is essential for degradation of the mitotic inhibitor, Swe1. Swe1 is localized to the septin cytoskeleton at the bud neck by the Swe1-binding protein Hsl7. Neck localization of Swe1 is required for Swe1 degradation. Although septins form a ring at the presumptive bud site prior to bud emergence, Hsl7 is not recruited to the septins until after bud emergence, suggesting that septins and/or Hsl7 respond to a “bud sensor”. Here we show that recruitment of Hsl7 to the septin ring depends on a combination of two septin-binding kinases: Hsl1 and Elm1. We elucidate which domains of these kinases are needed, and show that artificial targeting of those domains suffices to recruit Hsl7 to septin rings even in unbudded cells. Moreover, recruitment of Elm1 is responsive to bud emergence. Our findings suggest that Elm1 plays a key role in sensing bud emergence.
Abstract:
Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes response acceleration subject to a displacement constraint.
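In symbols, Gauss's Principle determines the accelerations of the redundant coordinates q at each instant by a constrained quadratic program (a standard statement of the principle; M, Q, A, and b denote the mass matrix, applied forces, and differentiated constraint data of such a model):

```latex
\min_{\ddot{q}} \;\; \tfrac{1}{2}\,
  \big(\ddot{q} - M^{-1}Q\big)^{\!\top} M\, \big(\ddot{q} - M^{-1}Q\big)
\quad \text{subject to} \quad A(q,\dot{q})\,\ddot{q} = b(q,\dot{q}).
% A linearly constrained quadratic minimization: the constrained
% accelerations stay as close as possible, in the mass-weighted norm,
% to the unconstrained ones.
```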
Abstract:
This paper introduces a screw-theory-based method, termed the constraint and position identification (CPI) approach, to synthesize decoupled spatial translational compliant parallel manipulators (XYZ CPMs) with consideration of actuation isolation. The proposed approach is based on a systematic arrangement of rigid stages and compliant modules in a three-legged XYZ CPM system using the constraint spaces and the position spaces of the compliant modules. The constraint spaces and the position spaces are first derived from screw theory rather than from rigid-body mechanism design experience. Additionally, the constraint spaces are classified into different constraint combinations, with typical position spaces depicted via geometric entities. Furthermore, the systematic synthesis process based on the constraint combinations and the geometric entities is demonstrated via several examples. Finally, several novel decoupled XYZ CPMs with monolithic configurations are created and verified by finite element analysis. The present CPI approach enables experts and beginners alike to synthesize a variety of decoupled XYZ CPMs with consideration of actuation isolation by selecting an appropriate constraint and an optimal position for each of the compliant modules according to a specific application.
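The derivation rests on screw reciprocity (the standard condition; the notation here is generic rather than the paper's): a module's constraint space is the set of wrenches reciprocal to every twist the module permits:

```latex
% A freedom twist T = (\omega; v) and a constraint wrench W = (f; m)
% are reciprocal when their reciprocal product vanishes:
W \circ T \;=\; f \cdot v \;+\; m \cdot \omega \;=\; 0.
```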
Abstract:
Mitotic genome instability can occur during the repair of double-strand breaks (DSBs) in DNA, which arise from endogenous and exogenous sources. Studying the mechanisms of DNA repair in the budding yeast Saccharomyces cerevisiae has shown that Homologous Recombination (HR) is a vital repair mechanism for DSBs. HR can result in a crossover event, in which the broken molecule reciprocally exchanges information with a homologous repair template. The current model of double-strand break repair (DSBR) also allows for a tract of information to transfer non-reciprocally from the template molecule to the broken molecule. These "gene conversion" events can vary in size and can occur in conjunction with a crossover event or in isolation. The frequency and size of gene conversions in isolation and gene conversions associated with crossing over have been a source of debate, owing to the variation in the systems used to detect gene conversions and the context in which the gene conversions are measured.
In Chapter 2, I use an unbiased system that measures the frequency and size of gene conversion events, as well as the association of gene conversion events with crossing over between homologs in diploid yeast. We show mitotic gene conversions occur at a rate of 1.3×10⁻⁶ per cell division, are either large (median 54.0 kb) or small (median 6.4 kb), and are associated with crossing over 43% of the time.
DSBs can arise from endogenous cellular processes such as replication and transcription. Two important RNA/DNA hybrids are involved in replication and transcription: R-loops, which form when an RNA transcript base-pairs with the DNA template and displaces the non-template DNA strand, and ribonucleotides embedded in DNA (rNMPs), which arise when replicative polymerases erroneously insert ribonucleotide instead of deoxyribonucleotide triphosphates. RNaseH1 (encoded by RNH1) and RNaseH2 (whose catalytic subunit is encoded by RNH201) both recognize and degrade the RNA within R-loops, while RNaseH2 alone recognizes, nicks, and initiates removal of rNMPs embedded in DNA. Because of their redundant abilities to act on RNA:DNA hybrids, aberrant removal of rNMPs from DNA has been thought to lead to genome instability in an rnh201Δ background.
In Chapter 3, I characterize (1) non-selective genome-wide homologous recombination events and (2) crossing over on chromosome IV in mutants defective in RNaseH1, RNaseH2, or both. Using a mutant DNA polymerase that incorporates 4-fold fewer rNMPs than wild type, I demonstrate that the primary recombinogenic lesion in the RNaseH2-defective genome is not rNMPs but rather R-loops. This work suggests different in vivo roles for RNaseH1 and RNaseH2 in resolving R-loops in yeast and is consistent with R-loops, not rNMPs, being the likely source of pathology in Aicardi-Goutières Syndrome patients defective in RNaseH2.
Abstract:
Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. To optimize performance, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done with a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained, which has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique requires no costly program analysis, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions give basic block versioning interprocedural optimization capabilities. The first attaches type information to object properties and global variables. Entry point specialization then passes type information from calling functions to callees, and call continuation specialization passes the types of return values from callees back to callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
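To make the idea concrete, here is a toy sketch of lazy basic block versioning in Python (purely illustrative: the real system compiles JavaScript basic blocks to machine code; the block name, type lattice, and instruction strings below are invented for the example):

```python
versions = {}  # (block id, frozen type context) -> (code, outgoing context)

def compile_block(block_id, ctx):
    """Emit code for one block, skipping tests the context already proves."""
    code = []
    if ctx.get("x") != "int":
        code.append("guard_int x")     # dynamic type test, emitted once
        ctx = {**ctx, "x": "int"}      # context now proves x is an int
    code.append("add_int x, 1")        # unguarded fast path
    return code, ctx

def get_version(block_id, ctx):
    """Lazily specialize the block for this context; reuse it afterwards."""
    key = (block_id, frozenset(ctx.items()))
    if key not in versions:
        versions[key] = compile_block(block_id, ctx)
    return versions[key]

# First entry, types unknown: a guarded version is generated...
code1, ctx1 = get_version("loop_body", {})
# ...but the version reached with the propagated context needs no guard.
code2, _ = get_version("loop_body", ctx1)
assert "guard_int x" in code1 and "guard_int x" not in code2
```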