Abstract:
Laser micromachining is an important material processing technique used in industry and medicine to produce parts with high precision. Control of the material removal process is imperative to obtain the desired part with minimal thermal damage to the surrounding material. Longer-pulsed lasers, with pulse durations of milliseconds and microseconds, are used primarily for laser through-cutting and welding. In this work, a two-pulse sequence using microsecond pulse durations is demonstrated to achieve consistent material removal during percussion drilling when the delay between the pulses is properly defined. The light-matter interaction moves from a regime of surface morphology changes to melt and vapour ejection. Inline coherent imaging (ICI), a broadband, spatially coherent imaging technique, is used to monitor the ablation process. The pulse parameter space is explored and the key regimes are determined. Material removal is observed when the pulse delay is on the order of the pulse duration. ICI is also used to directly observe the ablation process. Melt dynamics are characterized by monitoring surface changes during and after laser processing at several positions in and around the interaction region. Ablation is enhanced when the melt has time to flow back into the hole before the interaction with the second pulse begins. A phenomenological model is developed to understand the relationship between material removal and pulse delay. Based on melt refilling the interaction region, described by logistic growth, and heat loss, described by exponential decay, the model is fit to several datasets. The fit parameters reflect the pulse energies and durations used in the ablation experiments. For pulse durations of 50 µs with pulse energies of 7.32 mJ ± 0.09 mJ, the logistic growth component of the model reaches half maximum after 8.3 µs ± 1.1 µs and the exponential component decays with a time constant of 64 µs ± 15 µs.
The phenomenological model offers an interpretation of the material removal process.
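The logistic-growth/exponential-decay structure described in the abstract can be sketched as follows. The functional form, amplitude A and logistic width k are assumptions for illustration; only the half-maximum time (8.3 µs) and decay constant (64 µs) come from the text.

```python
import math

def removal(t, A=1.0, t0=8.3, k=1.0, tau=64.0):
    """Sketch of the phenomenological material-removal model:
    logistic melt refill multiplied by exponential heat loss.
    t, t0, tau in microseconds; A and k are hypothetical."""
    refill = 1.0 / (1.0 + math.exp(-(t - t0) / k))  # logistic growth, half maximum at t0
    heat = math.exp(-t / tau)                       # exponential heat loss
    return A * refill * heat
```

At t = t0 the logistic factor is exactly 0.5, so the model predicts enhanced removal once the melt has had time to refill the hole, with the benefit fading as heat is lost.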
Abstract:
Repositories containing high quality human biospecimens linked with robust and relevant clinical and pathological information are required for the discovery and validation of biomarkers for disease diagnosis, progression and response to treatment. Current molecular based discovery projects using either low or high throughput technologies rely heavily on ready access to such sample collections. It is imperative that modern biobanks align with molecular diagnostic pathology practices not only to provide the type of samples needed for discovery projects but also to ensure requirements for ongoing sample collections and the future needs of researchers are adequately addressed. Biobanks within comprehensive molecular pathology programmes are perfectly positioned to offer more than just tumour derived biospecimens; for example, they have the ability to facilitate researchers gaining access to sample metadata such as digitised scans of tissue samples annotated prior to macrodissection for molecular diagnostics or pseudoanonymised clinical outcome data or research results retrieved from other users utilising the same or overlapping cohorts of samples. Furthermore, biobanks can work with molecular diagnostic laboratories to develop standardized methodologies for the acquisition and storage of samples required for new approaches to research such as ‘liquid biopsies’ which will ultimately feed into the test validations required in large prospective clinical studies in order to implement liquid biopsy approaches for routine clinical practice. We draw on our experience in Northern Ireland to discuss how this harmonised approach of biobanks working synergistically with molecular pathology programmes is key for the future success of precision medicine.
Abstract:
Following the intrinsically linked balance sheets in his Capital Formation Life Cycle, Lukas M. Stahl explains with his Triple A Model of Accounting, Allocation and Accountability the stages of the Capital Formation process from FIAT to EXIT. Based on the theoretical foundations of legal risk laid by the International Bar Association with the help of Roger McCormick and legal scholars such as Joanna Benjamin, Matthew Whalley and Tobias Mahler, and founded on the basis of Wesley Hohfeld’s category theory of jural relations, Stahl develops his mutually exclusive Four Determinants of Legal Risk: Law, Lack of Right, Liability and Limitation. These Four Determinants of Legal Risk allow us to apply, assess, and precisely describe the respective legal risk at all stages of the Capital Formation Life Cycle, as demonstrated in case studies of nine industry verticals of the proposed and currently negotiated Transatlantic Trade and Investment Partnership between the United States of America and the European Union, TTIP, as well as in the case of the often-cited financing relation between the United States and the People’s Republic of China. Having established the Four Determinants of Legal Risk and their application to the Capital Formation Life Cycle, Stahl then explores the theoretical foundations of capital formation, their historical basis in classical and neo-classical economics and their forefathers, such as the Austrians around Eugen von Boehm-Bawerk, Ludwig von Mises and Friedrich von Hayek and, most notably and controversially, Karl Marx, and their impact on today’s exponential expansion of capital formation. Starting off with the first pillar of his Triple A Model, Accounting, Stahl then moves on to explain the Three Factors of Capital Formation - Man, Machines and Money - and shows how “value-added” is created with respect to the non-monetary capital factors of human resources and industrial production.
Followed by a detailed analysis discussing the roles of the Three Actors of Monetary Capital Formation, Central Banks, Commercial Banks and Citizens, Stahl readily dismisses a number of myths regarding the creation of money, providing in-depth insight into the workings of monetary policy makers, their institutions and ultimate beneficiaries, the corporate and consumer citizens. In his second pillar, Allocation, Stahl continues his analysis of the balance sheets of the Capital Formation Life Cycle by discussing the role of the Five Key Accounts of Monetary Capital Formation - the Sovereign, Financial, Corporate, Private and International accounts of Monetary Capital Formation - and the associated legal risks in the allocation of capital pursuant to his Four Determinants of Legal Risk. In his third pillar, Accountability, Stahl discusses the ever-recurring Crisis-Reaction-Acceleration-Sequence-History, in short: CRASH, since the beginning of the millennium: starting with the dot-com crash at the turn of the millennium, followed seven years later by the financial crisis of 2008, and the dislocations in the global economy we are facing another seven years later, today in 2015, with several sordid debt restructurings under way and hundreds of thousands of refugees on the way, caused by war and increasing inequality. Together with the regulatory reactions they have caused in the form of so-called landmark legislation such as the Sarbanes-Oxley Act of 2002, the Dodd-Frank Act of 2010, the JOBS Act of 2012 or the introduction of the Basel Accords, Basel II in 2004 and III in 2010, the European Financial Stability Facility of 2010, the European Stability Mechanism of 2012 and the European Banking Union of 2013, Stahl analyses the acceleration in size and scope of crises that appears to meet often seemingly helpless bureaucratic responses, the inherent legal risks and the complete lack of accountability on the part of those responsible.
Stahl argues that the order of the day requires addressing the root cause of the problems in the form of two fundamental design defects of our Global Economic Order, namely our monetary and judicial orders. Inspired by a 1933 plan of nine University of Chicago economists to abolish the fractional reserve system, he proposes the introduction of Sovereign Money as a prerequisite for voiding misallocations by way of judicial order in the course of domestic and transnational insolvency proceedings, including the restructuring of sovereign debt, throughout the entire monetary system back to its origin, without causing domino effects of banking collapses and failed financial institutions. Recognizing Austrian-American economist Schumpeter’s concept of Creative Destruction as a process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one, Stahl responds to Schumpeter’s economic chemotherapy with his Concept of Equitable Default, mimicking an immunotherapy that strengthens the corpus economicus’ own immune system by providing the judicial authority to terminate precisely those misallocations that have proven malignant, causing default, pursuant to the centuries-old common law concept of equity that allows for the equitable reformation, rescission or restitution of contract by way of judicial order. Following a review of the proposed mechanisms of transnational dispute resolution and current court systems with transnational jurisdiction, Stahl advocates, as a first step towards completing the Capital Formation Life Cycle from FIAT, the creation of money by way of credit, to EXIT, the termination of money by way of judicial order, the institution of a Transatlantic Trade and Investment Court constituted by a panel of judges from the U.S. Court of International Trade and the European Court of Justice, following the model of the EFTA Court of the European Free Trade Association.
Since his proposal was first made public in June 2014, after being discussed in academic circles since 2011, his and similar proposals have found numerous public supporters. Most notably, the former Vice President of the European Parliament, David Martin, tabled an amendment in June 2015 in the course of the negotiations on TTIP calling for an independent judicial body, and the Member of the European Commission, Cecilia Malmström, presented her proposal of an International Investment Court on September 16, 2015. Stahl concludes that, for the first time in the history of our generation, there appears to be a real opportunity for reform of our Global Economic Order by curing the two fundamental design defects of our monetary and judicial orders: the abolition of the fractional reserve system and the introduction of Sovereign Money, and the institution of a democratically elected Transatlantic Trade and Investment Court that, commensurate with its jurisdiction extending to cases concerning the Transatlantic Trade and Investment Partnership, may complete the Capital Formation Life Cycle, resolving cases of default with transnational judicial authority for the terminal resolution of misallocations in a New Global Economic Order, without the ensuing dangers of systemic collapse, from FIAT to EXIT.
Abstract:
An automated online SPE-LC-MS/MS method was developed for the quantitation of multiple classes of antibiotics in environmental waters. High sensitivity in the low ng/L range was accomplished by using large-volume injections of 10 mL of sample. Positive confirmation of analytes was achieved using two selected reaction monitoring (SRM) transitions per antibiotic, and quantitation was performed using an internal standard approach. Samples were extracted using online solid-phase extraction and then, using a column-switching technique, immediately passed through liquid chromatography and analyzed by tandem mass spectrometry. The total run time per sample was 20 min. The statistically calculated method detection limits for the various environmental samples were between 1.2 and 63 ng/L. Furthermore, the method was validated in terms of precision, accuracy and linearity. The developed analytical methodology was used to measure the occurrence of antibiotics in reclaimed waters (n=56), surface waters (n=53), ground waters (n=8) and drinking waters (n=54) collected from different parts of South Florida. In reclaimed waters, the most frequently detected antibiotics were nalidixic acid, erythromycin, clarithromycin, azithromycin, trimethoprim, sulfamethoxazole and ofloxacin (19.3-604.9 ng/L). The detection of antibiotics in reclaimed waters indicates that they cannot be completely removed by conventional wastewater treatment processes. Furthermore, the average mass load of antibiotics released into the local environment through reclaimed water was estimated at 0.248 kg/day. Among the surface water samples, the Miami River (reaching up to 580 ng/L) and Black Creek canal (up to 124 ng/L) showed the highest concentrations of antibiotics. No traces of antibiotics were found in ground waters. On the other hand, erythromycin (monitored as anhydroerythromycin) was detected in 82% of the drinking water samples (n.d.-66 ng/L).
The developed approach is suitable for both research and monitoring applications. Major metabolites of antibiotics in reclaimed waters were identified and quantified using a high-resolution benchtop Q-Exactive Orbitrap mass spectrometer. A phase I metabolite of erythromycin was tentatively identified in full scan based on accurate mass measurement. Using extracted ion chromatograms (XIC), high-resolution data-dependent MS/MS spectra and metabolic profiling software, the metabolite was identified as desmethyl anhydroerythromycin, with molecular formula C36H63NO12 and m/z 702.4423. The molar concentration of the metabolite relative to erythromycin was on the order of 13%. To my knowledge, this is the first report of this metabolite in reclaimed water. Another compound, acetyl-sulfamethoxazole, a phase II metabolite of sulfamethoxazole, was also identified in reclaimed water; the mole fraction of the metabolite represented 36% of the cumulative sulfamethoxazole concentration. These results illustrate the importance of including metabolites in routine analysis to obtain a mass balance for a better understanding of the occurrence, fate and distribution of antibiotics in the environment. Finally, all the antibiotics detected in reclaimed and surface waters were investigated to assess the potential risk to aquatic organisms. The surface water antibiotic concentrations, which represented real exposure conditions, revealed that the macrolide antibiotics erythromycin, clarithromycin and tylosin, along with the quinolone antibiotic ciprofloxacin, were suspected to induce high toxicity to aquatic biota. Preliminary results show that, among the antibiotic groups tested, macrolides posed the highest ecological threat; they may therefore need to be further evaluated with long-term exposure studies considering bioaccumulation factors and a larger number of species.
Overall, the occurrence of antibiotics in the aquatic environment poses an ecological health concern.
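The internal-standard quantitation mentioned in the abstract can be sketched as follows; the peak areas, internal-standard concentration and relative response factor below are hypothetical, not values from the study.

```python
def quantify_is(area_analyte, area_istd, conc_istd, rrf):
    """Internal-standard quantitation (sketch).
    rrf: relative response factor = (A_analyte/A_istd) / (C_analyte/C_istd),
    determined from calibration. Returns the analyte concentration
    in the same units as conc_istd (e.g. ng/L)."""
    return (area_analyte / area_istd) * conc_istd / rrf

# hypothetical example: peak-area ratio 0.42, internal standard spiked at 100 ng/L, RRF 0.8
conc = quantify_is(4200.0, 10000.0, 100.0, 0.8)  # → 52.5 ng/L
```

Ratioing against a co-eluting internal standard compensates for extraction losses and ion-suppression effects, which is why it suits complex matrices such as reclaimed water.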
Abstract:
Site-specific management (SSM) is a form of precision agriculture whereby decisions on resource application and agronomic practices are improved to better match soil and crop requirements as they vary in the field. SSM enables the identification of regions (homogeneous management zones) within the area delimited by field boundaries. These subfield regions constitute areas with similar permanent characteristics. Traditional soil and pasture sampling and the necessary laboratory analyses are time-consuming, labour-intensive and cost-prohibitive, and not viable from an SSM perspective, because a large number of soil and pasture samples is needed to achieve a good representation of soil properties, nutrient levels and pasture quality and productivity. The main objective of this work was to evaluate technologies with potential for monitoring the spatial and temporal variability of soil nutrients and of pasture green and dry matter yield (GM and DM, respectively, in kg/ha), and for supporting the farmer's decision making. Three types of sensors were evaluated in a 7 ha experimental pasture field: an electromagnetic induction sensor ("DUALEM 1S", which measures the soil apparent electrical conductivity, ECa), an active optical sensor ("OptRx®", which measures the NDVI, "Normalized Difference Vegetation Index") and a capacitance probe ("GrassMaster II", which estimates plant mass). The results indicate that a soil electrical conductivity probe is probably the best tool for monitoring not only some characteristics of the soil but also those of the pasture, which could represent an important help in simplifying the sampling process and in supporting SSM decision making in precision agriculture projects.
On the other hand, the significant and very strong correlations obtained between capacitance and NDVI, and between either of these parameters and pasture productivity, show the potential of these tools for monitoring the evolution of spatial and temporal patterns of the vegetative growth of biodiverse pastures, and for identifying different plant species and variability in pasture yield in Alentejo dry-land farming systems. These results are relevant for the selection of an adequate sensing system for a particular application and open new perspectives for further work to test, calibrate and validate the sensors in a wider range of pasture production conditions, namely the extraordinary diversity of botanical species characteristic of the Mediterranean region at different periods of the year.
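The correlation analysis between sensor readings (e.g. capacitance vs. NDVI) reduces to a plain Pearson coefficient; the sample readings below are made up for illustration, not data from the trial.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical capacitance (probe counts) and NDVI readings at the same sampling points
capacitance = [8120, 9350, 10100, 11240, 12000]
ndvi = [0.52, 0.61, 0.66, 0.74, 0.79]
r = pearson(capacitance, ndvi)  # strongly positive for these values
```

A correlation matrix across all three sensors and the pasture GM/DM measurements is what supports choosing one proxy sensor (such as the ECa probe) to stand in for costlier sampling.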
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on micro-controller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of micro-controller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current state-of-the-art (SoA) STM32 micro-controller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions to deal with sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M-based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude.
To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
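Sub-byte integer formats like those targeted by XpulpNN rely on packing several narrow operands per machine word; a minimal sketch of unsigned 4-bit packing and a packed dot product, written in plain Python and independent of the actual ISA extensions:

```python
def pack_u4(values):
    """Pack pairs of unsigned 4-bit values (0..15) into bytes, low nibble first."""
    assert all(0 <= v < 16 for v in values) and len(values) % 2 == 0
    return bytes(values[i] | (values[i + 1] << 4) for i in range(0, len(values), 2))

def unpack_u4(packed):
    """Inverse of pack_u4: recover the 4-bit values from the packed bytes."""
    out = []
    for b in packed:
        out.append(b & 0xF)   # low nibble
        out.append(b >> 4)    # high nibble
    return out

def dot_u4(pa, pb):
    """Dot product over two u4-packed vectors, as a SIMD-style QNN kernel computes it."""
    return sum(x * y for x, y in zip(unpack_u4(pa), unpack_u4(pb)))

a = pack_u4([1, 2, 3, 4])  # weights, 2 per byte -> 4x denser than int8... at int4
b = pack_u4([5, 6, 7, 8])  # activations
# dot = 1*5 + 2*6 + 3*7 + 4*8 = 70
```

Dedicated sub-byte dot-product instructions do this unpack-multiply-accumulate in hardware, which is where the energy-efficiency gains over plain Cortex-M software loops come from.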
Abstract:
With the advent of new technologies, it is increasingly easy to find data of different natures from ever more accurate sensors that measure the most disparate physical quantities with different methodologies. The collection of data thus becomes progressively important, taking the form of archiving, cataloguing, and online and offline consultation of information. Over time, the amount of data collected can become so large that it contains information that cannot easily be explored manually or with basic statistical techniques. Big Data therefore becomes the object of more advanced investigation techniques, such as Machine Learning and Deep Learning. In this work, some applications in the world of precision livestock farming, concerning heat stress experienced by dairy cows, are described. Experimental barns in Italy and Germany were involved in the training and testing of the Random Forest algorithm, yielding predictions of milk production as a function of the microclimatic conditions of the previous days with satisfactory accuracy. Furthermore, in order to define an objective method for identifying production drops, a Robust Statistics technique was used as an alternative to the Wood model, typically employed as an analytical model of the lactation curve. Its application to some sample lactations and the results obtained allow us to be confident about the use of this method in the future.
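The Wood model mentioned above is the standard analytical lactation curve, y(t) = a·t^b·e^(-ct), with its peak at day t = b/c. A minimal sketch with hypothetical parameter values (not fitted to the study's herds):

```python
import math

def wood(t, a, b, c):
    """Wood (1967) lactation curve: daily milk yield on day t in milk."""
    return a * t ** b * math.exp(-c * t)

# hypothetical parameters for illustration; the peak falls at t = b / c
a, b, c = 15.0, 0.25, 0.003
peak_day = b / c  # ~83 days in milk for these values
```

Deviations of observed daily yields below this fitted curve are one way to flag production drops, which is the comparison point for the robust-statistics approach described in the abstract.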
Abstract:
In the era of precision medicine and big medical data sharing, it is necessary to solve the workflow of digital radiological big data in a productive and effective way. In particular, it is now possible to extract information "hidden" in digital images in order to create diagnostic algorithms that help clinicians set up more personalized therapies, which are particular targets of modern oncological medicine. Digital images generated by the patient have a "texture" structure that is not visible but encrypted; it is "hidden" because it cannot be recognized by sight alone. Thanks to artificial intelligence, pre- and post-processing software and the generation of mathematical calculation algorithms, we can perform a classification based on non-visible data contained in radiological images. Being able to calculate the volume of tissue body composition could lead to creating clustered classes of patients inserted into standard morphological reference tables, based on human anatomy and distinguished by gender and age, and perhaps in the future also by race. Furthermore, the branch of "morpho-radiology" is a useful modality for solving problems regarding personalized therapies, which are particularly needed in the oncological field. Oncological therapies today are no longer based on generic drugs but on targeted, personalized therapy. The lack of gender- and age-specific therapy tables could be filled thanks to the application of morpho-radiology data analysis.
Abstract:
The present thesis focuses on the permeability analysis of Aquivion® 980, a perfluorosulfonic acid (PFSA) polymer, with particular reference to the influence of the equivalent weight (grams of polymer per mol SO3H) on the permeation properties. The Aquivion® grades tested in the open literature were indeed characterized by a lower equivalent weight (870 g/mol SO3H against 980 g/mol SO3H for the present material). The permeability of different gases (CO2, N2, and CH4) was tested at different temperatures and humidities, the latter being a parameter that greatly influences gas transport in such hydrophilic materials: Aquivion® swells considerably in humid conditions, increasing its gas permeability by more than one order of magnitude with respect to the values prevailing in dry conditions. The present data confirm this behaviour, the permeability of all gases and vapours tested being substantially increased in the presence of water. Interestingly, the increase in permeability is similar for all the gases inspected; hence this enhanced permeation capability is not associated with the selectivity loss that occurs in many polymeric membranes. However, the CO2 results are lower than those obtained with the lower-equivalent-weight grades of Aquivion®, suggesting that an increase in this parameter is detrimental to both the permeability and the CO2 selectivity of the membranes. This is likely related to the fact that a lower content of SO3H groups makes it difficult to form an interconnected water domain inside the membranes. A modelling approach was considered to describe the experimental data and give better insight into the observed behaviour; unfortunately, it proved not sensitive enough to capture the differences between the gas permeabilities of PFSAs with high and low equivalent weight. These differences were indeed usually within 10-20%, which is in the same range as the model's precision when used in a predictive way.
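The observation that a uniform permeability increase leaves selectivity unchanged follows directly from the definition of ideal selectivity, α = P_A / P_B; the permeability values below are made up for illustration, not measurements from the thesis.

```python
def ideal_selectivity(p_a, p_b):
    """Ideal gas-pair selectivity: ratio of pure-gas permeabilities."""
    return p_a / p_b

# hypothetical dry-state permeabilities (arbitrary units)
p_co2_dry, p_n2_dry = 10.0, 0.5
# humid state: both permeabilities scaled by the same swelling factor
factor = 15.0
alpha_dry = ideal_selectivity(p_co2_dry, p_n2_dry)
alpha_humid = ideal_selectivity(p_co2_dry * factor, p_n2_dry * factor)
# alpha_humid == alpha_dry: a common multiplicative gain cancels in the ratio
```

This is why the humidity-induced swelling reported above raises throughput without the permeability/selectivity trade-off typical of polymeric membranes.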
Abstract:
Membrane microdomains enriched in cholesterol, sphingolipids (rafts), and specific proteins are involved in important physiological functions. However, their structure, size and stability are still controversial. Given that detergent-resistant membranes (DRMs) are in the liquid-ordered state and are rich in raft-like components, they might correspond to rafts, at least to some extent. Here we monitor the lateral order of biological membranes by characterizing DRMs from erythrocytes obtained with Brij-98, Brij-58, and TX-100 at 4 °C and 37 °C. All DRMs were enriched in cholesterol and contained the raft markers flotillin-2 and stomatin. However, sphingomyelin (SM) was found to be enriched only in TX-100-DRMs - a detergent that preferentially solubilizes the membrane inner leaflet - while Band 3 was present solely in Brij-DRMs. Electron paramagnetic resonance spectra showed that the acyl chain packing of Brij-DRMs was lower than that of TX-100-DRMs, providing evidence of their diverse lipid composition. Fatty acid analysis revealed that the SM fraction of the DRMs was enriched in lignoceric acid, which should specifically contribute to the resistance of SM to detergents. These results indicate that lipids from the outer leaflet, particularly SM, are essential for the formation of the liquid-ordered phase of DRMs. Finally, the differential solubilization processes induced by Brij-98 and TX-100 were monitored using giant unilamellar vesicles. This study suggests that Brij- and TX-100-DRMs reflect different degrees of lateral order of the membrane microdomains. Additionally, Brij-DRMs are composed of both inner and outer leaflet components, making them more physiologically relevant than TX-100-DRMs for studies of membrane rafts.
Abstract:
This study evaluated in vitro the antibacterial activity of 4 root canal filling materials for primary teeth - zinc oxide and eugenol cement (ZOE), Calen paste thickened with zinc oxide (Calen/ZO), Sealapex sealer and EndoREZ sealer - against 5 bacterial strains commonly found in endodontic infections (Kocuria rhizophila, Enterococcus faecalis, Streptococcus mutans, Escherichia coli and Staphylococcus aureus) using the agar diffusion test (agar-well technique). Calen paste, 1% chlorhexidine digluconate (CHX) and distilled water served as controls. Seven wells per dish were made at equidistant points and immediately filled with the test and control materials. After incubation of the plates at 37 °C for 24 h, the diameter of the zones of bacterial growth inhibition produced around the wells was measured (in mm) with a digital caliper under reflected light. Data were analyzed statistically by analysis of variance and Tukey's post-hoc test (α=0.05). There were statistically significant differences (p<0.0001) among the zones of bacterial growth inhibition produced by the different materials against all target microorganisms. K. rhizophila was inhibited more effectively (p<0.05) by ZOE, while Calen/ZO had its highest antibacterial activity against E. faecalis (p<0.05). S. mutans was inhibited by Calen/ZO, Sealapex and ZOE with the same intensity (p>0.05). E. coli was inhibited more effectively (p<0.05) by ZOE, followed by Calen/ZO and Sealapex. Calen/ZO and ZOE were equally effective (p>0.05) against S. aureus, while Sealapex had the lowest antibacterial efficacy (p<0.05) against this microorganism. EndoREZ presented antibacterial activity only against K. rhizophila and S. aureus. The Calen paste and Calen/ZO produced larger zones of inhibition than 1% CHX when the marker microorganism was E. faecalis.
In conclusion, the in vitro antibacterial activity of the 4 root canal filling materials for primary teeth against bacterial strains commonly found in endodontic infections can be presented in a decreasing order of efficacy as follows: ZOE>Calen/ZO>Sealapex>EndoREZ.
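The analysis of variance used above reduces to comparing between-group and within-group variability; a minimal one-way F-statistic sketch on made-up inhibition-zone diameters (not the study's measurements):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic for a list of numeric groups."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # between-group sum of squares: group means vs grand mean
    ssb = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    # within-group sum of squares: values vs their own group mean
    ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# hypothetical inhibition-zone diameters (mm) for three materials
f_stat = one_way_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [7.0, 8.0, 9.0]])  # → 31.0
```

A large F against the F(k-1, n-k) distribution signals that at least one material differs, after which a post-hoc test such as Tukey's identifies which pairs differ.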
Abstract:
PURPOSE: The main goal of this study was to develop and compare two different techniques for classification of specific types of corneal shapes when Zernike coefficients are used as inputs. A feed-forward artificial Neural Network (NN) and discriminant analysis (DA) techniques were used. METHODS: The inputs both for the NN and DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK), installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons which were associated with 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of precision ([true positive+true negative]/total number of cases). Mean overall results for all cases for the NN and DA techniques were, respectively, 94% and 84.8%. CONCLUSION: Although we used a relatively small database, results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be a reliable parameter as input data for diagnostic automation of VK maps, using either NN or DA.
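The figure of merit quoted above, (true positive + true negative) / total cases — usually called overall accuracy — can be sketched as follows; the label vectors are hypothetical, not the study's 80 classified maps.

```python
def overall_accuracy(true_labels, predicted_labels):
    """Fraction of correctly classified cases, i.e.
    (true positives + true negatives) / total number of cases,
    as used to compare the NN and DA classifiers."""
    correct = sum(1 for t, p in zip(true_labels, predicted_labels) if t == p)
    return correct / len(true_labels)

# hypothetical labels over the 5 corneal classes
# (0=keratoconus, 1=WTR astigmatism, 2=ATR astigmatism, 3=normal, 4=post-PRK)
truth = [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
pred  = [0, 1, 2, 3, 4, 0, 1, 2, 0, 4]
acc = overall_accuracy(truth, pred)  # 0.9 for this example
```

Computed per class and overall, this is the metric behind the reported 94% (NN) vs 84.8% (DA) comparison.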
Abstract:
A method to compute three-dimensional (3D) left ventricle (LV) motion and a color-coded visualization scheme for its qualitative analysis in SPECT images is proposed. It is used to investigate some aspects of Cardiac Resynchronization Therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and from patients with severe idiopathic heart failure, before and after CRT. Color-coded visualization maps representing LV regional motion showed significant differences between patients and normal subjects and clearly distinguished the two groups. Numerical results of regional mean values representing the intensity and direction of movement in the radial direction are presented. A difference of one order of magnitude in the intensity of movement in patients relative to normal subjects was observed. Quantitative and qualitative parameters gave good indications of the potential application of the technique to the diagnosis and follow-up of patients submitted to CRT.
Abstract:
We use multiwavelength data (H I, FUV, NUV, R) to search for evidence of star formation in the intragroup medium of the Hickson Compact Group 100. We find that young star-forming regions are located in the intergalactic H I clouds of the compact group, which extend over 130 kpc away from the main galaxies. A tidal dwarf galaxy (TDG) candidate is located in the densest region of the H I tail, 61 kpc from the brightest group member, and its age is estimated to be only 3.3 Myr. Fifteen other intragroup H II regions and TDG candidates are detected in the Galaxy Evolution Explorer (GALEX) FUV image within a 10' x 10' field encompassing the H I tail. They have ages <200 Myr, H I masses of 10^(9.2-10.4) M⊙, 0.001
Abstract:
Context. Dwarf irregular galaxies are relatively simple, unevolved objects in which it is easy to test models of galactic chemical evolution. Aims. We attempt to determine the star formation and gas accretion history of IC 10, a local dwarf irregular for which abundance, gas, and mass determinations are available. Methods. We apply detailed chemical evolution models to predict the evolution of several chemical elements (He, O, N, S) and compare our predictions with the observational data. We consider additional constraints such as the present-time gas fraction, the star formation rate (SFR), and the total estimated mass of IC 10. We assume a dark matter halo for this galaxy and study the development of a galactic wind. We consider different star formation regimes, bursting and continuous, and explore different wind situations: i) a normal wind, where all the gas is lost at the same rate, and ii) a metal-enhanced wind, where the metals produced by supernovae are preferentially lost. We also study a case without wind. We vary the star formation efficiency (SFE), the wind efficiency, and the timescale of the gas infall, which are the most important parameters in our models. Results. We find that only models with metal-enhanced galactic winds can reproduce the properties of IC 10. The star formation must have proceeded in bursts rather than continuously, and the bursts must have been fewer than ~10 over the whole galactic lifetime. Finally, IC 10 must have formed by a slow process of gas accretion, with a timescale of the order of 8 Gyr.
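For an exponentially declining infall rate proportional to e^(-t/τ), the fraction of the total accreted gas in place by time t is 1 - e^(-t/τ); with the τ ≈ 8 Gyr found above, only about 63% of the gas has arrived after one timescale. A minimal sketch (the exponential infall law is the standard assumption in such models, not a detail stated in the abstract):

```python
import math

def accreted_fraction(t_gyr, tau_gyr=8.0):
    """Fraction of the total infalling gas accreted by time t, assuming an
    infall rate proportional to exp(-t/tau) (exponential infall law)."""
    return 1.0 - math.exp(-t_gyr / tau_gyr)

f_one_tau = accreted_fraction(8.0)   # ~0.63 after one e-folding time
f_hubble = accreted_fraction(13.0)   # still well below 1: accretion continues today
```

A long τ of this kind is what makes the galaxy assemble slowly, consistent with the bursty, low-efficiency star formation history inferred for IC 10.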