930 results for Weakly Compact Sets
Abstract:
The basic goal of this study is to extend old and to propose new ways of generating knapsack sets suitable for use in public-key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary; for example, solving the knapsack problem (which is here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems, and these categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of low density. Chapter 3 briefly interrupts the more abstract treatment by showing examples of small injective knapsacks and extrapolating conjectures about some characteristics of knapsacks of larger size, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but, as argued in Chapter 4, it is difficult to find other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas about the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, which is called the congruential knapsack. The addends of a subset sum can be found by iteratively reducing the sum, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to iteratively join small knapsacks, each element of which would satisfy the superincreasing condition. As a whole, none of these systems need be superincreasing, though the density develops no better than in the superincreasing case. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks. A couple of modifications to the use of this system are presented in order to further increase security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
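Since the Merkle-Hellman construction recurs throughout the chapters above, a minimal sketch may help fix ideas: a superincreasing private knapsack is disguised by modular multiplication, encryption is a subset sum over the public knapsack, and decipherment inverts the trapdoor and solves the superincreasing instance greedily. The toy parameters below are illustrative only, not secure choices from the thesis.

```python
def keygen():
    private = [2, 3, 7, 14, 30, 57, 120, 251]  # superincreasing sequence
    m = 491                                    # modulus > sum(private) = 484
    w = 41                                     # multiplier with gcd(w, m) = 1
    public = [(w * b) % m for b in private]    # trapdoor: modular multiplication
    return private, m, w, public

def encrypt(bits, public):
    # the ciphertext is the subset sum selected by the plaintext bits
    return sum(p for bit, p in zip(bits, public) if bit)

def decrypt(c, private, m, w):
    c = (c * pow(w, -1, m)) % m                # undo the trapdoor transformation
    bits = []
    for b in reversed(private):                # greedy decipherment works because
        bits.append(1 if c >= b else 0)        # the private knapsack is superincreasing
        c -= b * bits[-1]
    return bits[::-1]

private, m, w, public = keygen()
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg, public), private, m, w) == msg
```

The two properties the thesis singles out as dangerous are both visible here: the superincreasing structure that makes greedy decipherment possible, and the modular multiplication that disguises it.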
Abstract:
We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set: a robust local 3D RANSAC-based procedure filters the point set for outliers, and a local jet surface (a low-degree surface approximation) is then fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed; in the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
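As an illustration of the 1D RANSAC step, a sketch follows, assuming each splat crossed by a query segment contributes a candidate intersection parameter t along the segment; a consensus estimate is then picked among the candidates. The threshold and iteration count are illustrative assumptions, not the paper's settings.

```python
import random
random.seed(0)  # reproducible demo

def ransac_1d(candidates, threshold=0.02, iters=100):
    # pick the intersection parameter t with the largest consensus set
    best = []
    for _ in range(iters):
        t = random.choice(candidates)          # hypothesize from one splat
        inliers = [s for s in candidates if abs(s - t) <= threshold]
        if len(inliers) > len(best):
            best = inliers
    # refit: average the inliers; outlier splats are simply outvoted
    return sum(best) / len(best) if best else None

# four splats agree near t = 0.30; one outlier splat votes at t = 0.90
print(ransac_1d([0.31, 0.30, 0.32, 0.90, 0.29]))
```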
Abstract:
We present a participant study that compares biological data exploration tasks using volume renderings of laser confocal microscopy data across three environments that vary in level of immersion: a desktop, fishtank, and cave system. For the tasks, data, and visualization approach used in our study, we found that subjects qualitatively preferred and quantitatively performed better in the cave compared with the fishtank and desktop. Subjects performed real-world biological data analysis tasks that emphasized understanding spatial relationships including characterizing the general features in a volume, identifying colocated features, and reporting geometric relationships such as whether clusters of cells were coplanar. After analyzing data in each environment, subjects were asked to choose which environment they wanted to analyze additional data sets in - subjects uniformly selected the cave environment.
Abstract:
In this paper I discuss the intuition behind Frege's and Russell's definitions of numbers as sets, as well as Benacerraf's criticism of it. I argue that Benacerraf's argument is not as strong as some philosophers tend to think. Moreover, I examine an alternative to the Fregean-Russellian definition of numbers proposed by Maddy, and point out some problems faced by it.
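To make the Benacerraf point concrete for readers unfamiliar with it, the standard rival reductions can be written out; the codings below (Zermelo, von Neumann, and the Frege-Russell class reading, the last of which needs classes or type theory rather than ZF sets) are textbook material, not claims from the paper itself.

```latex
% Two equally serviceable but incompatible set-theoretic codings of 2,
% alongside the Frege-Russell definition discussed in the paper:
\begin{align*}
  2_{\mathrm{Zermelo}}       &= \{\{\varnothing\}\}\\
  2_{\mathrm{von\ Neumann}}  &= \{\varnothing,\{\varnothing\}\}\\
  2_{\mathrm{Frege/Russell}} &= \{\,x \mid \exists a\,\exists b\,(a \neq b \wedge x = \{a,b\})\,\}
\end{align*}
% Benacerraf: nothing about arithmetic decides between the first two,
% so, he argues, numbers cannot literally be any particular sets.
```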
Abstract:
The goal of this study was to develop a fuzzy model to predict the occupancy rate of free-stall facilities for dairy cattle, helping to optimize facility design. The following input variables were defined for the development of the fuzzy system: dry bulb temperature (Tdb, °C), wet bulb temperature (Twb, °C), and black globe temperature (Tbg, °C). Based on the input variables, the fuzzy system predicts the occupancy rate (OR, %) of dairy cattle in free-stall barns. For model validation, data were collected at the facilities of the Intensive System of Milk Production (SIPL) at the Dairy Cattle National Research Center (CNPGL) of Embrapa. The OR values estimated by the fuzzy system showed an average standard deviation of 3.93%, indicating a low error rate in the simulation. Simulated and measured results were statistically equal (P > 0.05, t-test). After validation of the proposed model, the average percentage of correct answers on the simulated data was 89.7%. The fuzzy system developed for predicting the occupancy rate of free-stall facilities for dairy cattle therefore gives realistic predictions of stall occupancy, supporting the planning and design of free-stall barns.
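A minimal Mamdani-style sketch of a predictor with this shape is given below; the triangular memberships, rule base, and breakpoints are illustrative assumptions, not the values calibrated against the SIPL/CNPGL data.

```python
def tri(x, a, b, c):
    # triangular membership function rising from a to peak b, falling to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_occupancy(tdb, twb, tbg):
    # rule 1: comfortable thermal conditions -> high occupancy (around 85%)
    comfortable = min(tri(tdb, 5, 18, 26), tri(twb, 2, 15, 22), tri(tbg, 5, 20, 28))
    # rule 2: hot conditions -> low occupancy (around 30%)
    hot = min(tri(tdb, 22, 32, 45), tri(tbg, 24, 35, 50))
    total = comfortable + hot
    # weighted average of the rule consequents (crisp defuzzification)
    return 50.0 if total == 0 else (comfortable * 85.0 + hot * 30.0) / total

print(predict_occupancy(tdb=20.0, twb=16.0, tbg=21.0))  # 85.0: fully "comfortable"
```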
Abstract:
The European Organization for Nuclear Research (CERN) operates the largest particle collider in the world, the Large Hadron Collider (LHC), which will undergo a maintenance break sometime in 2017 or 2018. During the break, the particle detectors that operate around the collider will be serviced and upgraded. With the improved performance of the collider, the requirements for the detector electronics will become more demanding. In particular, the high levels of radiation during operation impose requirements on the electronics that are uncommon in commercial electronics. Electronics built to function in this challenging environment have been designed at CERN. To meet the future challenges of data transmission, a GigaBit Transceiver data transmission module and an E-Link data bus have been developed, and the next generation of readout electronics is designed to benefit from these technologies. The current readout chips, however, are not compatible with them. As a result, alongside new Gas Electron Multiplier (GEM) detectors and other technology, a new compatible chip is being developed to function within the GEMs for the Compact Muon Solenoid (CMS) project. The objective of this thesis was to study a data transmission interface located on the readout chip between the E-Link bus and the control logic of the chip; the function of the module is to handle data transmission between the chip and the E-Link. A model of the interface was implemented in the Verilog hardware description language and simulated using Cadence chip design software. The design procedure introduces the state machines and operating principles of the E-Link interface, together with alternative implementation options. The functionality of the designed logic is demonstrated in simulation results, in which the implemented model is shown to be suitable for its task. Finally, suggestions for improving the design are presented.
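The thesis design itself is in Verilog, but a schematic sketch of the kind of receive state machine such a serial interface is typically built around may clarify the design problem; the states, framing, and 8-bit payload below are illustrative assumptions, not the actual E-Link protocol.

```python
# Toy receive state machine: start bit, one sync cycle, 8 data bits.
IDLE, SYNC, PAYLOAD = range(3)

class ELinkRx:
    def __init__(self):
        self.state = IDLE
        self.shift = 0          # shift register collecting payload bits
        self.nbits = 0
        self.frames = []        # completed frames handed to the control logic

    def clock(self, bit):
        # called once per serial clock edge on the data line
        if self.state == IDLE and bit == 1:     # start bit announces a frame
            self.state = SYNC
        elif self.state == SYNC:                # one alignment cycle
            self.state, self.shift, self.nbits = PAYLOAD, 0, 0
        elif self.state == PAYLOAD:
            self.shift = (self.shift << 1) | bit
            self.nbits += 1
            if self.nbits == 8:                 # frame complete
                self.frames.append(self.shift)
                self.state = IDLE

rx = ELinkRx()
for b in [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]:        # start, sync, then 0b10101010
    rx.clock(b)
print(rx.frames)                                 # [170]
```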
Abstract:
There are both similarities and differences between Finnish and German everyday conversations. This study, belonging to the field of German philology, examines one central activity of everyday conversation, the closing of a telephone call, as produced by Finnish and German speakers. The data consist of personal, natural telephone conversations recorded for this study by native speakers of Finnish and German; 12 Finnish and 12 German calls were selected for the corpus. Appropriate permission to use the recordings was obtained from all parties. The calls were transcribed according to the GAT transcription system established in the German-speaking area. The theoretical and methodological framework draws on two fields of research: interactional linguistics and cross-linguistic comparison. The interactional-linguistic analysis focuses on observations about the structure of turns and sequences of talk, and prosodic cues are systematically exploited in interpreting the meanings of turns. The result is a close conversation-analytic description of individual closings, on the basis of which the sequential structure of each closing is determined. All closings were alike insofar as each contained at least an opening sequence, a sequence referring to a future meeting, and a sequence leading to the terminal greetings. On the basis of variations in sequential structure, however, the closings in the data can be divided into groups. Three types of closings were observed in both the Finnish and the German data: compact, complex, and interrupted closings. The grouping into three types aids the next stage of description, in which the Finnish and German closings are compared with each other. While the study illuminates the points at which the two data sets converge and diverge, it also indicates which levels of interaction lend themselves to cross-linguistic comparison; relatively little has so far been said about which levels of interaction can be included in such comparison. The work thus builds a bridge between interactional linguistics and contrastive linguistics.
Abstract:
The present study compares the performance of stochastic and fuzzy models for the analysis of the relationship between clinical signs and diagnosis. Data obtained for 153 children, covering diagnosis (pneumonia, other non-pneumonia diseases, absence of disease) and seven clinical signs, were divided into two samples, one for analysis and the other for validation. The former was used to derive relations by multi-discriminant analysis (MDA) and by fuzzy max-min composition (fuzzy), and the latter was used to assess the predictions drawn from each type of relation. MDA and fuzzy were closely similar in terms of prediction, correctly allocating 75.7 to 78.3% of patients in the validation sample and disagreeing in only a single instance: a patient with a low level of toxemia was mistakenly classified as not diseased by MDA and correctly classified as ill by fuzzy. Concerning the relations themselves, each method provided different information, revealing different aspects of the relations between clinical signs and diagnoses. Both methods agreed in pointing to X-ray, dyspnea, and auscultation as the signs best related to pneumonia, but only fuzzy was able to detect relations of heart rate, body temperature, toxemia, and respiratory rate with pneumonia. Moreover, only fuzzy was able to detect a relationship between heart rate and absence of disease, which allowed the detection of six malnourished children whose classification as healthy is, indeed, disputable. The conclusion is that even though fuzzy set theory might not improve prediction, it certainly enhances clinical knowledge, since it detects relationships not visible to stochastic models.
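As a sketch of the fuzzy max-min composition used here, consider relating patients to diagnoses through clinical-sign memberships; all membership values below are made-up illustrations, not the study's data.

```python
def max_min_compose(R, S):
    # (R o S)[i][k] = max over j of min(R[i][j], S[j][k])
    return [[max(min(R[i][j], S[j][k]) for j in range(len(S)))
             for k in range(len(S[0]))]
            for i in range(len(R))]

# patient x sign memberships (columns: dyspnea, abnormal X-ray)
patients_signs = [[0.8, 0.9],
                  [0.2, 0.1]]
# sign x diagnosis relation (columns: pneumonia, absence of disease)
signs_diagnoses = [[0.7, 0.2],
                   [0.9, 0.1]]

print(max_min_compose(patients_signs, signs_diagnoses))
# [[0.9, 0.2], [0.2, 0.2]]: patient 1 relates strongly to pneumonia
```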
Abstract:
Coronary artery disease (CAD) is a leading cause of death worldwide. The standard method for evaluating critical partial occlusions is coronary arteriography, a catheterization technique that is invasive, time consuming, and costly. There are noninvasive approaches to the early detection of CAD; their basis is a sequential analysis of risk factors together with the results of the treadmill test and myocardial perfusion scintigraphy (MPS). Many investigators have demonstrated that the diagnostic applications of MPS are appropriate for patients who have an intermediate likelihood of disease. Although this information is useful, it is only partially utilized in clinical practice because of the difficulty of properly classifying patients. Since the seminal work of Lotfi Zadeh, fuzzy logic has been applied in numerous areas. In the present study, we proposed and tested a model based on fuzzy set theory to select patients for MPS. A group of 1053 patients was used to develop the model and another group of 1045 patients was used to test it. Receiver operating characteristic curves were used to compare the performance of the fuzzy model against expert physician opinions, and showed that the performance of the fuzzy model was equal or superior to that of the physicians. We therefore conclude that the fuzzy model could be a useful tool to assist the general practitioner in the selection of patients for MPS.
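A small sketch of the ROC comparison: the AUC of a continuous fuzzy referral score can be computed with the rank (Wilcoxon-Mann-Whitney) formulation and set against binary expert decisions on the same patients. The labels and scores below are made-up illustrations, not the study's data.

```python
def roc_auc(labels, scores):
    # AUC = probability that a random positive outranks a random negative
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels       = [1, 1, 1, 0, 0, 0]              # 1 = CAD confirmed
fuzzy_score  = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]  # continuous model output
expert_refer = [1, 1, 0, 1, 0, 0]              # binary physician decision

print(roc_auc(labels, fuzzy_score))    # 1.0 on this toy data
print(roc_auc(labels, expert_refer))   # ~0.67 on this toy data
```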
Abstract:
Findings by our group have shown that the dorsolateral telencephalon of Gymnotus carapo sends efferents to the mesencephalic torus semicircularis dorsalis (TSd), and that this connection is presumably involved in the changes in electric organ discharge (EOD) and the skeletomotor responses observed following microinjections of the GABA-A antagonist bicuculline into this telencephalic region. Other studies have implicated the TSd, or its mammalian homologue the inferior colliculus, in defensive responses. In the present study, we explore the possible involvement of the TSd and of the GABAergic system in the modulation of electric and skeletomotor displays. For this purpose, different doses of bicuculline (0.98, 0.49, 0.245, and 0.015 mM) and muscimol (15.35 mM) were microinjected (0.1 µL) into the TSd of awake G. carapo. Microinjection of bicuculline induced dose-dependent interruptions of the EOD and increased skeletomotor activity resembling defense displays. The effects of the two highest doses showed maximum values at 5 min (4.3 ± 2.7 and 3.8 ± 2.0 Hz, P < 0.05) and persisted until 10 min (11 ± 5.7 and 8.7 ± 5.2 Hz, P < 0.05). Microinjections of muscimol were ineffective. During the interruptions of the EOD, the novelty response (an increase in frequency in response to sensory novelties) induced by an electric stimulus delivered through a pair of electrodes placed in the water of the experimental cuvette was reduced or abolished. These data suggest that GABAergic mechanisms of the TSd inhibit the neural substrate of the defense reaction at this midbrain level.
Abstract:
Statistics, based on legal deposit accessions, of slide, transparency, and film card series published in Finland from 1991 onwards
Abstract:
Rough Set Data Analysis (RSDA) is a non-invasive data analysis approach that relies solely on the data to find patterns and decision rules. Despite its non-invasive approach and its ability to generate human-readable rules, classical RSDA has not been successfully used in commercial data mining and rule-generating engines. The reason is its poor scalability: classical RSDA slows down a great deal on larger data sets and takes much longer to generate the rules. This research addresses the issue of scalability in rough sets by improving the performance of the attribute reduction step of classical RSDA, which is the root cause of its slow performance. We propose to move the entire attribute reduction process into the database. We defined a new schema to store the initial data set, and then defined SQL queries on this new schema to find the attribute reducts correctly and faster than the traditional RSDA approach. We tested our technique on two typical data sets and compared our results with the traditional RSDA approach for attribute reduction. Finally, we highlight some issues with our proposed approach that could lead to future research.
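A minimal sketch of the idea of pushing attribute reduction into the database, under assumptions about the schema (a single decision column d, illustrative rows): an attribute subset preserves the decision if no group of rows agreeing on those attributes disagrees on d, which a single GROUP BY ... HAVING query can check.

```python
import sqlite3
from itertools import combinations

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a1 TEXT, a2 TEXT, a3 TEXT, d TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    ("hi", "yes", "red",  "sick"),
    ("hi", "no",  "red",  "sick"),
    ("lo", "yes", "blue", "well"),
    ("lo", "no",  "red",  "well"),
])

def consistent(attrs):
    # one query: does any value combination of attrs map to >1 decision?
    cols = ", ".join(attrs)   # trusted identifiers in this sketch
    q = f"SELECT 1 FROM t GROUP BY {cols} HAVING COUNT(DISTINCT d) > 1 LIMIT 1"
    return conn.execute(q).fetchone() is None

# a reduct is a consistent attribute subset with no consistent proper subset
attrs = ["a1", "a2", "a3"]
reducts = [c for r in range(1, len(attrs) + 1)
           for c in combinations(attrs, r)
           if consistent(c)
           and not any(consistent(s) for k in range(1, r)
                       for s in combinations(c, k))]
print(reducts)   # [('a1',)] for this toy table
```

The work each consistency check does is thus a single set-at-a-time SQL aggregation rather than a row-by-row scan in application code, which is where the speedup over in-memory classical RSDA comes from.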
Abstract:
We provide a survey of the literature on ranking sets of objects. The interpretations of those set rankings include those employed in the theory of choice under complete uncertainty, rankings of opportunity sets, set rankings that appear in matching theory, and the structure of assembly preferences. The survey is prepared for the Handbook of Utility Theory, vol. 2, edited by Salvador Barberà, Peter Hammond, and Christian Seidl, to be published by Kluwer Academic Publishers. The chapter number is provisional.