983 results for File sharing applications
Abstract:
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preferences and technical constraints. The smart grid design aims to provide overall power system monitoring and to create protection and control strategies that maintain system performance, stability and security. This dissertation contributed to the development of a unique and novel smart grid test-bed laboratory with integrated monitoring, protection and control systems, which served as a platform for testing the smart grid operational ideas developed here. Implementing this system in real-time software creates an environment for studying, implementing and verifying the novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available Data Acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes and abnormal conditions, together with stability and security information, and supplies valuable measurements to power system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning, operation and energy trading, in addition to enabling advanced applications in Wide Area Monitoring, Protection and Control (WAMPAC). Moreover, a virtual protection system was developed and implemented in the smart grid laboratory with integrated functionality for wide-area applications. Experiments and procedures were developed to detect abnormal system conditions and apply proper remedies to heal the system. A DC microgrid was designed and integrated into the AC system with appropriate control capability, providing a realistic hybrid AC/DC microgrid connected to the AC side for studying how such an architecture can help remedy abnormal system conditions. In addition, this dissertation explored the challenges and feasibility of implementing real-time system analysis features to monitor system security and stability measures; these indices were measured experimentally during the operation of the developed hybrid AC/DC microgrids. Furthermore, a real-time optimal power flow system was implemented to optimally manage power sharing between AC generators and DC-side resources. A study of a real-time energy management algorithm in hybrid microgrids evaluated the effects of energy storage resources and their use in mitigating heavy-load impacts on system stability and operational security.
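As a hedged illustration of the phasor measurement idea (a minimal sketch, not the dissertation's DAQ-based implementation), the snippet below estimates a voltage phasor from one cycle of waveform samples with a single-bin DFT; the 60 Hz fundamental, 4.8 kHz sampling rate and 170 V amplitude are assumed example values.

```python
import numpy as np

def estimate_phasor(samples, fs, f0=60.0):
    """Single-bin DFT over one window: returns (RMS magnitude, phase in rad)."""
    n = np.arange(len(samples))
    ref = np.exp(-2j * np.pi * f0 * n / fs)        # complex reference at f0
    x = (2.0 / len(samples)) * np.sum(samples * ref)
    return np.abs(x) / np.sqrt(2.0), np.angle(x)

# Assumed example: 60 Hz, 170 V peak, 30-degree phase, sampled at 4.8 kHz
fs = 4800.0
t = np.arange(80) / fs                              # exactly one fundamental cycle
v = 170.0 * np.cos(2 * np.pi * 60.0 * t + np.pi / 6)
mag, ph = estimate_phasor(v, fs)
print(mag, np.degrees(ph))                          # ~120.2 V RMS, ~30.0 degrees
```

With an integer number of samples per cycle the single-bin DFT is leakage-free, which is why the recovered magnitude and phase match the assumed signal exactly.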
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. smartphone), and also collects all available context data (such as from sensors in the digital device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example, a service to produce activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, as it is based on an individual user model. As BaranC supports continuous user monitoring, an application can be dynamically adaptive in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework where the collection of data for the UDI and all sharing of the UDI data are kept strictly under the user's control. In addition, being service-oriented allows (with the user's permission) its monitoring and analysis services to be easily used by 3rd parties in order to provide 3rd party adaptive assistant services. An example 3rd party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model of monitoring and use of personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
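A hedged sketch of how such a prediction demonstrator might work follows; the AppPredictor class, its context tuples and the simple frequency-counting strategy are illustrative assumptions, not the actual BaranC API.

```python
from collections import Counter, defaultdict

class AppPredictor:
    """Toy context-conditioned predictor: counts of (context, app) pairs
    stand in for the monitored user-model data."""
    def __init__(self):
        self.counts = defaultdict(Counter)   # context -> app usage counts

    def observe(self, context, app):
        """Record one monitored app launch in a given context."""
        self.counts[context][app] += 1

    def predict(self, context, k=3):
        """Return the k apps most often used in this context."""
        return [app for app, _ in self.counts[context].most_common(k)]

p = AppPredictor()
p.observe(("morning", "home"), "news")
p.observe(("morning", "home"), "news")
p.observe(("morning", "home"), "mail")
print(p.predict(("morning", "home")))        # ['news', 'mail']
```

Because the model is keyed on context, it adapts naturally as new (context, app) observations accumulate, mirroring the gradual adaptation described above.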
Abstract:
Each year, music piracy worldwide costs several billion dollars in economic losses, lost jobs and lost worker earnings, as well as millions of dollars in lost tax revenue. Most music piracy stems from the rapid growth and ease of current technologies for copying, sharing, manipulating and distributing musical data [Domingo, 2015], [Siwek, 2007]. Audio watermarking has been proposed to protect authors' rights and to allow the localization of the instants at which an audio signal has been tampered with. In this thesis, we propose to use the bio-inspired sparse spike-graph representation (the spikegram) to design a new method for localizing tampering in audio signals, a new copyright-protection method, and, finally, a new perceptual attack, based on the spikegram, against audio watermarking systems. We first propose a technique for localizing tampering in audio signals by combining a modified spread spectrum (MSS) method with a sparse representation. We use an adapted perceptual matching pursuit technique (PMP [Hossein Najaf-Zadeh, 2008]) to generate a sparse representation (spikegram) of the input audio signal that is invariant to time shifts [E. C. Smith, 2006] and that takes into account masking phenomena as observed in hearing. An authentication code is embedded in the coefficients of the spikegram representation, which are then combined with the masking thresholds. The watermarked signal is resynthesized from the modified coefficients, and the resulting signal is transmitted to the decoder. At the decoder, to identify a tampered segment of the audio signal, the authentication codes of all intact segments are analyzed: if a code cannot be detected correctly, the corresponding segment is known to have been tampered with. We embed the watermark according to the spread spectrum principle (MSS) in order to obtain a high capacity in the number of embedded watermark bits. Even when the encoder and decoder are desynchronized, our method can still detect tampered pieces, and compared with the state of the art, our approach has the lowest error rate in detecting them. We used the mean opinion score (MOS) test to measure the quality of the watermarked signals, and we evaluate the semi-fragile watermarking method by the bit error rate (number of erroneous bits divided by the total number of embedded bits) after several attacks. The results confirm the superiority of our approach for localizing tampered pieces in audio signals while preserving signal quality. Next, we propose a new technique for protecting audio signals, based on the spikegram representation and the use of two dictionaries (TDA, Two-Dictionary Approach). The spikegram is used to encode the host signal with a dictionary of gammatone filters; for the watermarking, we use two different dictionaries that are selected according to the input bit to be embedded and the content of the signal.
Our approach finds the appropriate gammatones (called watermarking kernels) based on the value of the bit to be embedded, and inserts the watermark bits into the phase of the watermarking gammatones. Moreover, the TDA is shown to be error-free in the absence of any attack, and it is demonstrated that decorrelating the watermarking kernels allows the design of a very robust audio watermarking method. Experiments showed the best robustness for the proposed method, compared with several recent techniques, when the watermarked signal is corrupted by MP3 compression at 32 kbps with a payload of 56.5 bps. We also studied the robustness of the watermark under the new USAC codec (Unified Speech and Audio Coding) at 24 kbps; the payload is then between 5 and 15 bps. Finally, we use spikegrams to propose three new attack methods and compare them with recent attacks such as 32 kbps MP3 and 24 kbps USAC compression. These attacks comprise the PMP attack, the inaudible-noise attack and the sparse replacement attack. In the PMP attack, the watermarked signal is represented and resynthesized with a spikegram. In the inaudible-noise attack, inaudible noise is generated and added to the spikegram coefficients. In the sparse replacement attack, in each segment of the signal the spectro-temporal features ('time spikes') are found using the spikegram, and similar time spikes are replaced by one another. To compare the effectiveness of the proposed attacks, we apply them against a spread spectrum watermark decoder. It is shown that the sparse replacement attack reduces the normalized correlation of the spread spectrum decoder by a larger factor than attacking the decoder with MP3 (32 kbps) or 24 kbps USAC compression.
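To make the spread spectrum principle underlying the MSS method concrete, here is a minimal, hedged sketch of classic additive spread spectrum watermarking with a correlation decoder; the host coefficient vector, the key and the strength alpha are assumptions for illustration, not the thesis's spikegram-domain implementation.

```python
import numpy as np

def pn_sequence(key, n):
    """Pseudo-random +/-1 spreading sequence derived from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

def embed(host, bit, key, alpha=0.1):
    """Add the spreading sequence to the host coefficients, signed by the bit."""
    sign = 1.0 if bit else -1.0
    return host + alpha * sign * pn_sequence(key, len(host))

def detect(received, key):
    """Correlation decoder: the sign of the correlation with the spreading
    sequence recovers the bit; its magnitude drops when an attack
    (e.g. lossy compression) decorrelates the signal."""
    pn = pn_sequence(key, len(received))
    corr = received @ pn / len(received)
    return corr > 0, corr

host = np.random.default_rng(1).normal(size=4096)   # stand-in host coefficients
wm = embed(host, bit=1, key=42)
print(detect(wm, key=42))                           # (True, corr near 0.1)
```

An attack like the sparse replacement described above succeeds precisely by driving this normalized correlation toward zero while keeping the audio perceptually intact.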
Abstract:
The convergence between recent developments in sensing technologies, data science, signal processing and advanced modelling has fostered a new paradigm for the Structural Health Monitoring (SHM) of engineered structures, one based on intelligent sensors, i.e., embedded devices capable of processing data streams and/or performing structural inference in a self-contained, near-sensor manner. To efficiently exploit these intelligent sensor units for full-scale structural assessment, a joint effort is required to deal with instrumental aspects related to signal acquisition, conditioning and digitization, and with those pertaining to data management, data analytics and information sharing. In this framework, the main goal of this Thesis is to tackle the multi-faceted nature of the monitoring process via a full-scale optimization of the hardware and software resources involved in the SHM system. The pursuit of this objective has required the investigation of both: i) transversal aspects common to multiple application domains at different abstraction levels (such as knowledge distillation, networking solutions and microsystem HW architectures), and ii) the specificities of the monitoring methodologies (vibration, guided-wave and acoustic emission monitoring). The key tools adopted in the proposed monitoring frameworks belong to the embedded signal processing field: namely, graph signal processing, compressed sensing, ARMA system identification, digital data communication and TinyML.
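As a hedged illustration of one of these tools, the sketch below recovers a sparse signal from a few linear measurements with Orthogonal Matching Pursuit, in the spirit of compressed sensing at an intelligent sensor node; the dimensions and the Gaussian sensing matrix are assumptions, not the Thesis's actual pipeline.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        # least-squares refit on the current support, then update the residual
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4                 # ambient dimension, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true                       # few measurements taken at the sensor node
x_hat = omp(A, y, k)
print(np.max(np.abs(x_hat - x_true)))   # ~1e-15: exact recovery
```

The appeal for embedded SHM is that the node only transmits the m-dimensional measurement vector y, deferring the heavier reconstruction to a gateway or the cloud.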
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated much like natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were found by manual inspection or with automatic static and dynamic analyzers; now, the task can be automated using learning approaches that speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method level). The exploited data and model architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to related work are discussed.
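A hedged sketch of a text-based PLI approach follows, using character n-gram features with a linear classifier; the toy snippets, labels and model choice are illustrative assumptions, not the thesis's models or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny assumed training set: a few snippets with known languages
snippets = ['def f(x):\n    return x + 1',
            'int main() { return 0; }',
            'println!("hi");',
            'console.log("hi");']
labels = ['Python', 'C', 'Rust', 'JavaScript']

# Character n-grams capture language-specific tokens (def, ;, println!, =>)
clf = make_pipeline(
    TfidfVectorizer(analyzer='char_wb', ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
clf.fit(snippets, labels)
print(clf.predict(['fn main() { println!("hello"); }']))  # likely ['Rust']
```

Character-level features scale well to large archives because they need no per-language tokenizer, which is one reason text-based approaches are attractive at GitHub or Software Heritage scale.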
Abstract:
Protocols for the generation of dendritic cells (DCs) that use serum to supplement culture media lead to reactions due to animal proteins and to disease transmission. Several types of serum-free media (SFM), based on good manufacturing practice (GMP), have recently been used and seem to be a viable option. The aim of this study was to evaluate the differentiation, maturation, and function of DCs from Acute Myeloid Leukemia (AML) patients, generated in SFM and in medium supplemented with autologous serum (AS). DCs were analyzed for phenotype characteristics, viability, and functionality. The results showed that viable DCs could be generated under all the conditions tested. In patients, the X-VIVO 15 medium was more efficient than the other media tested in the generation of DCs producing IL-12p70 (p=0.05). Moreover, the presence of AS led to a significant increase of IL-10 production by DCs as compared with the CellGro (p=0.05) and X-VIVO 15 (p=0.05) media, in both patients and donors. We conclude that SFM was efficient in the production of DCs for immunotherapy in AML patients; however, the use of AS appears to interfere with the functional capacity of the generated DCs.
Abstract:
To evaluate the effectiveness of Reciproc for the removal of cultivable bacteria and endotoxins from root canals in comparison with multifile rotary systems. The root canals of forty human single-rooted mandibular premolars were contaminated with an Escherichia coli suspension for 21 days and randomly assigned to four groups according to the instrumentation system: GI - Reciproc (VDW); GII - Mtwo (VDW); GIII - ProTaper Universal (Dentsply Maillefer); and GIV - FKG Race™ (FKG Dentaire) (n = 10 per group). Bacterial and endotoxin samples were taken with a sterile/apyrogenic paper point before (s1) and after instrumentation (s2). Culture techniques determined the colony-forming units (CFU), and the Limulus Amebocyte Lysate assay was used for endotoxin quantification. Results were submitted to the paired t-test and ANOVA. At s1, bacteria and endotoxins were recovered in 100% of the root canals investigated (40/40). After instrumentation, all systems were associated with a highly significant reduction of the bacterial load and endotoxin levels, respectively: GI - Reciproc (99.34% and 91.69%); GII - Mtwo (99.86% and 83.11%); GIII - ProTaper (99.93% and 78.56%); and GIV - FKG Race™ (99.99% and 82.52%) (P < 0.001). No statistical differences were found amongst the instrumentation systems regarding bacteria and endotoxin removal (P > 0.01). The reciprocating single file, Reciproc, was as effective as the multifile rotary systems for the removal of bacteria and endotoxins from root canals.
Abstract:
The goal of this cross-sectional observational study was to quantify pattern-shift visual evoked potentials (VEP) and the thickness as well as the volume of retinal layers using optical coherence tomography (OCT) across a cohort of Parkinson's disease (PD) patients and age-matched controls. Forty-three PD patients and 38 controls were enrolled. All participants underwent detailed neurological and ophthalmologic evaluations. Idiopathic PD cases were included; cases with glaucoma or increased intra-ocular pressure were excluded. Patients were assessed by VEP and high-resolution Fourier-domain OCT, which quantified the inner and outer thicknesses of the retinal layers. VEP latencies and the thicknesses of the retinal layers were the main outcome measures. The mean ages, with standard deviation (SD), of the PD patients and controls were 63.1 (7.5) and 62.4 (7.2) years, respectively. The patients were predominantly in the initial Hoehn-Yahr (HY) disease stages (34.8% in stage 1 or 1.5, and 55.8% in stage 2). The VEP latencies and the thicknesses as well as the volumes of the retinal inner and outer layers were similar between the groups. A negative correlation between retinal thickness and age was noted in both groups. The thickness of the retinal nerve fibre layer (RNFL) was 102.7 μm in PD patients vs. 104.2 μm in controls. The retinal layer thicknesses, VEP, and RNFL of PD patients were similar to those of the controls. Despite the use of a representative cohort of PD patients and high-resolution OCT in this study, further studies are required to establish the validity of OCT and VEP measurements as anatomic and functional biomarkers for the evaluation of retinal and visual pathways in PD patients.
Abstract:
Paper has become increasingly recognized as a very interesting substrate for the construction of microfluidic devices, with potential application in a variety of areas, including health diagnosis, environmental monitoring, immunoassays and food safety. The aim of this review is to present a short history of analytical systems constructed from paper, to summarize the main advantages and disadvantages of fabrication techniques, to explore alternative methods of detection such as colorimetric, electrochemical, photoelectrochemical, chemiluminescence and electrochemiluminescence detection, and to take a closer look at the novel achievements in the field of bioanalysis published during the last two years. Finally, future trends in the production of such devices are discussed.
Abstract:
This study investigated the influence of cervical preflaring with different rotary instruments on determination of the initial apical file (IAF) in mesiobuccal roots of mandibular molars. Fifty human mandibular molars whose mesial roots presented two clearly separated apical foramina (mesiobuccal and mesiolingual) were used. After standard access opening and removal of pulp tissue, the working length (WL) was determined at 1 mm short of the root apex. Five groups (n=10) were formed at random, according to the type of instrument used for cervical preflaring. In group 1, the size of the IAF was determined without preflaring of the cervical and middle root canal thirds. In groups 2 to 5, preflaring was performed with Gates-Glidden drills, ProTaper instruments, EndoFlare instruments and LA Axxess burs, respectively. Canals were sized manually with K-files, starting with size 08 K-files inserted passively up to the WL. File sizes were increased until a binding sensation was felt at the WL, and the size of the file was recorded. The instrument corresponding to the IAF was fixed into the canal at the WL with methylcyanoacrylate. The teeth were then sectioned transversally 1 mm short of the apex, with the IAF in position. Cross-sections of the WL region were examined under scanning electron microscopy (FEG), and the discrepancies between the canal diameter and the diameter of the IAF were calculated using the "rule" tool of the microscope's proprietary software. The measurements (µm) were analyzed statistically by the Kruskal-Wallis and Dunn's tests at a 5% significance level. There were statistically significant differences among the groups (p<0.05). The non-flared group had the greatest discrepancy (125.30 ± 51.54) and differed significantly from all flared groups (p<0.05). Cervical preflaring with LA Axxess burs produced the smallest discrepancies (55.10 ± 48.31), followed by EndoFlare instruments (68.20 ± 42.44), Gates-Glidden drills (68.90 ± 42.46) and ProTaper files (77.40 ± 73.19). However, no significant differences (p>0.05) were found among the rotary instruments. In conclusion, cervical preflaring improved IAF fitting to the canals at the WL in mesiobuccal roots of mandibular molars. The rotary instruments evaluated in this study did not differ from each other regarding the discrepancies produced between the IAF size and the canal diameter at the WL.
Abstract:
In this study, scanning electron microscopy (SEM) was used to evaluate the adaptation of the first apical file after preflaring in mesiobuccal (MB) and mesiolingual (ML) canals of mandibular molars, taking tactile sensation as the reference. The mesial canals (n = 22) of human mandibular molar teeth were used, and the first instrument to bind at the working length was determined after preflaring and crown-down shaping. Digital images of the root apex were acquired, and a single examiner determined the contact of the file with the canal walls using the ImageJ software. The results showed file-wall contact of 47.83% and 31.71% in the MB and ML canals, respectively; when the apices were fused, the average was 40.03%. A descriptive analysis showed that the first apical file did not touch all dentin walls in any of the samples.
Abstract:
Technical evaluation of analytical data is extremely relevant, considering that such data can be used for comparison with environmental quality standards and for decision-making related to the management of dredged sediment disposal and the evaluation of salt and brackish water quality in accordance with the CONAMA 357/05 Resolution. It is therefore essential that the project manager discusses the environmental agency's technical requirements with the contracted laboratory, both to follow up the analyses underway and in view of possible re-analysis when anomalous data are identified. The main technical requirements are: (1) method quantitation limits (QLs) should fall below environmental standards; (2) analyses should be carried out in laboratories whose analytical scope is accredited by the National Institute of Metrology (INMETRO) or qualified or accepted by a licensing agency; (3) a chain of custody should be provided in order to ensure sample traceability; (4) control charts should be provided to prove method performance; (5) certified reference material analysis or, if that is not available, matrix spike analysis should be undertaken; and (6) chromatograms should be included in the analytical report. Within this context, and with a view to helping environmental managers evaluate analytical reports, this work aims to discuss the limitations of applying SW-846 US EPA methods to marine samples and the consequences of reporting data based on method detection limits (MDL) rather than sample quantitation limits (SQL), and to present possible modifications of the principal methods applied by laboratories in order to comply with environmental quality standards.
Abstract:
Colloidal particles have been used to template the electrosynthesis of several materials, such as semiconductors, metals and alloys. The method allows good control over the thickness of the resulting material through the choice of the charge applied to the system, and it is able to produce high-density deposited materials without shrinkage. These materials are a faithful replica of the template structure and, owing to the high surface areas obtained, are very promising for electrochemical applications. In the present work, monodisperse polystyrene templates were assembled over gold, platinum and glassy carbon substrates in order to demonstrate the electrodeposition of an oxide, a conducting polymer and a hybrid inorganic-organic material with applications in the supercapacitor and sensor fields. The performance of the resulting nanostructured films has been compared with that of the analogous bulk materials, and the results achieved are described in this paper.
Abstract:
We describe the concept, the fabrication, and the most relevant properties of a piezoelectric-polymer system: two fluoroethylenepropylene (FEP) films with good electret properties are laminated around a specifically designed and prepared polytetrafluoroethylene (PTFE) template at 300 °C. After removing the PTFE template, a two-layer FEP film with open tubular channels is obtained. For electric charging, the two-layer FEP system is subjected to a high electric field. The resulting dielectric barrier discharges inside the tubular channels yield a ferroelectret with high piezoelectricity; d33 coefficients of up to 160 pC/N have already been achieved on the ferroelectret films. After charging at suitably elevated temperatures, the piezoelectricity is stable at temperatures of at least 130 °C. Advantages of the transducer films include ease of fabrication at laboratory or industrial scale, a wide range of possible geometrical and processing parameters, straightforward control of the uniformity of the polymer system, the flexibility and versatility of the soft ferroelectrets, and a large potential for device applications, e.g. in the areas of biomedicine, communications, production engineering, sensor systems and environmental monitoring.
Abstract:
The effects of chromium or nickel oxide additions on the composition of Portland clinker were investigated by X-ray powder diffraction associated with pattern analysis by the Rietveld method. The co-processing of industrial waste in Portland cement plants is an alternative solution to the problem of final disposal of hazardous waste, and industrial waste containing chromium or nickel is hazardous and difficult to dispose of. It was observed that, in concentrations of up to 1% by mass, chromium or nickel oxide additions do not cause significant alterations in Portland clinker composition.