840 results for binary to multi-class classifiers
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We appreciate your being with us this afternoon as we celebrate the accomplishments of tomorrow's graduates from the University of Nebraska-Lincoln. These students truly are a reason for celebration. Thank you for attending and congratulations to the class of 2008! This Salute to the Graduates ceremony has become a wonderful tradition in the College of Agricultural Sciences and Natural Resources.
Abstract:
Positive selection (PS) in the thymus involves the presentation of self-peptides that are bound to MHC class II on the surface of cortical thymus epithelial cells (cTECs). The Prss16 gene, which encodes the thymus-specific serine protease (Tssp), a cTEC serine-type peptidase involved in the proteolytic generation of self-peptides, is one important element regulating the PS of CD4(+) T lymphocytes. Nevertheless, additional peptidase genes participating in the generation of self-peptides remain to be found. Because of its role in the mechanism of PS and its expression in cTECs, the Prss16 gene might be used as a transcriptional marker to identify new genes that share the same expression profile and that encode peptidases in the thymus. To test this hypothesis, we compared the differential thymic expression of 4,500 mRNAs of wild-type (WT) C57BL/6 mice with their respective Prss16-knockout (KO) mutants by using microarrays. Of these, 223 genes were differentially expressed, of which 115 had known molecular/biological functions. Four endopeptidase genes (Casp1, Casp2, Psmb3 and Tpp2) share the same expression profile as the Prss16 gene, i.e., induced in WT and repressed in KO, while one endopeptidase gene, Capns1, features the opposite expression profile. The Tpp2 gene is highlighted because it encodes a serine-type endopeptidase functionally similar to the Tssp enzyme. Profiling of the KO mice featured down-regulation of Prss16, as expected, along with the genes mentioned above. Considering that the Prss16-KO mice featured impaired PS, the shared regulation of the four endopeptidase genes suggests their participation in the mechanism of self-peptide generation and PS.
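As a minimal illustration of the profiling step described above, the following sketch selects genes whose KO-vs-WT expression change matches the marker gene's direction. Only the gene names come from the abstract; the log2 fold-change values and the threshold are invented placeholders.

```python
# Sketch of selecting genes that share an expression profile with a marker
# gene in WT-vs-KO microarray data. Gene names are from the abstract; the
# fold-change values below are illustrative placeholders only.

def shared_profile(log2fc_ko_vs_wt, marker="Prss16", threshold=1.0):
    """Return genes changed in the same direction as the marker, and those opposite."""
    marker_down = log2fc_ko_vs_wt[marker] <= -threshold
    same, opposite = [], []
    for gene, fc in log2fc_ko_vs_wt.items():
        if gene == marker:
            continue
        if (fc <= -threshold) == marker_down and abs(fc) >= threshold:
            same.append(gene)
        elif abs(fc) >= threshold:
            opposite.append(gene)
    return sorted(same), sorted(opposite)

# Illustrative values only: negative = repressed in KO relative to WT.
fc = {"Prss16": -3.0, "Casp1": -1.5, "Casp2": -1.2,
      "Psmb3": -2.0, "Tpp2": -1.8, "Capns1": +1.4}
same, opposite = shared_profile(fc)
print(same)      # genes repressed in KO, like Prss16
print(opposite)  # genes with the opposite profile
```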
Abstract:
Objectives. This study recorded and evaluated the degree of intra- and inter-group agreement by different examiners for the classification of lower third molars according to both Winter's and Pell & Gregory's systems. Study Design. An observational, cross-sectional study was performed on forty lower third molars analyzed from twenty digital panoramic radiographs. Four examiner groups (undergraduates, maxillofacial surgeons, oral radiologists and clinical dentists) from Aracaju, Sergipe, Brazil, classified them with respect to angulation, class and position. Analysis of variance (ANOVA) was applied to the examiners' findings with a significance level of p<0.05 and 95% confidence intervals. Results. Intra- and inter-group agreement was observed in Winter's classification system among all examiners. Pell & Gregory's classification system showed average intra-group agreement and a statistically significant difference for the position variable in the inter-group analysis, with the greatest disagreement in the clinical dentists group (p<0.05). Conclusions. High reproducibility was associated with Winter's classification, whereas the system proposed by Pell & Gregory did not demonstrate appropriate levels of reliability.
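The study itself uses ANOVA, but the underlying notion of inter-examiner agreement can be sketched with a simpler pairwise percent-agreement measure. The examiner ratings below are hypothetical, invented only to show the calculation.

```python
# Sketch of a simple inter-examiner agreement measure (pairwise percent
# agreement) for categorical classifications such as Winter's angulation
# classes. This is a simpler measure than the ANOVA used in the study,
# and the ratings below are invented for illustration.

from itertools import combinations

def pairwise_percent_agreement(ratings_by_examiner):
    examiners = list(ratings_by_examiner)
    pairs = list(combinations(examiners, 2))
    total = 0.0
    for a, b in pairs:
        ra, rb = ratings_by_examiner[a], ratings_by_examiner[b]
        total += sum(x == y for x, y in zip(ra, rb)) / len(ra)
    return total / len(pairs)

ratings = {
    "surgeon":     ["mesioangular", "vertical", "horizontal", "vertical"],
    "radiologist": ["mesioangular", "vertical", "horizontal", "distoangular"],
    "dentist":     ["mesioangular", "vertical", "mesioangular", "vertical"],
}
print(round(pairwise_percent_agreement(ratings), 3))
```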
Abstract:
Abstract Background Considering the increasing use of polymyxins to treat infections due to multidrug-resistant Gram-negative bacteria in many countries, it is important to evaluate different susceptibility testing methods for this class of antibiotic. Methods Susceptibility of 109 carbapenem-resistant P. aeruginosa isolates to polymyxins was tested comparing broth microdilution (reference method), disc diffusion, and Etest using the new interpretative breakpoints of the Clinical and Laboratory Standards Institute. Results Twenty-nine percent of the isolates belonged to an endemic clone and these strains were therefore excluded from the analysis. Among the 78 strains evaluated, only one isolate was resistant to polymyxin B by the reference method (MIC: 8.0 μg/mL). Very major and major error rates of 1.2% and 11.5% were detected comparing polymyxin B disc diffusion with broth microdilution (reference method). Agreement within one twofold dilution between Etest and broth microdilution was 33% for polymyxin B and 79.5% for colistin. One major error and 48.7% minor errors were found comparing the polymyxin B Etest with broth microdilution, and only 6.4% minor errors with colistin. The concordance between Etest and broth microdilution (reference method) was 100% for colistin and 90% for polymyxin B. Conclusion Resistance to polymyxins seems to be rare among hospital carbapenem-resistant P. aeruginosa isolates over a six-year period. Our results showed, using the new CLSI criteria, that disc diffusion does not produce major errors (false-resistant results) for colistin but shows a high frequency of minor errors and one very major error for polymyxin B. Etest presented better results for colistin than for polymyxin B. Until these results are reproduced with a larger number of polymyxin-resistant P. aeruginosa isolates, susceptibility to polymyxins should be confirmed by a reference method.
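The error categories used in this comparison follow standard definitions, which can be made concrete in a short sketch: very major = reference resistant but test susceptible, major = reference susceptible but test resistant, minor = one method calls intermediate while the other does not. The example S/I/R calls below are made up for illustration.

```python
# Sketch of categorical-agreement scoring between a test method and the
# broth-microdilution reference, using the standard error definitions:
# very major = reference R, test S; major = reference S, test R;
# minor = one method calls I while the other calls S or R.
# The S/I/R calls below are invented for illustration.

def error_rates(reference, test):
    assert len(reference) == len(test)
    n = len(reference)
    very_major = sum(1 for r, t in zip(reference, test) if r == "R" and t == "S")
    major = sum(1 for r, t in zip(reference, test) if r == "S" and t == "R")
    minor = sum(1 for r, t in zip(reference, test)
                if r != t and "I" in (r, t))
    return {"very_major": very_major / n, "major": major / n, "minor": minor / n}

ref  = ["S", "S", "R", "S", "I", "S", "S", "S"]   # reference method calls
disc = ["S", "R", "S", "S", "S", "S", "I", "S"]   # test method calls
print(error_rates(ref, disc))
```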
Abstract:
In this thesis, numerical methods for determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that the calculation of modes higher than the fundamental mode is possible. Thereafter, the Explicitly-Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is touched upon. Although the modified power iteration method is a computationally expensive algorithm, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without the user needing to set any parameters in the algorithm. On the other hand, the Arnoldi method, which requires some parameters to be defined by the user, is a very efficient method for calculating the eigenfunctions of large sparse systems of equations with minimal computational effort. These methods are thereafter used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator might be difficult if the contributions from the individual modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after the Arnoldi method was used to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, such oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
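The idea of computing modes beyond the fundamental one with power iteration can be illustrated on a toy problem: find the dominant eigenpair, deflate it out, and iterate again for the next mode. The 2x2 symmetric matrix below is only a stand-in, not an actual discretized diffusion operator.

```python
# Minimal sketch of the modified power iteration idea: compute the dominant
# eigenpair of a small symmetric matrix, then deflate it to expose the next
# mode. The 2x2 matrix is a toy stand-in for a discretized operator.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, v, iters=500):
    for _ in range(iters):
        w = mat_vec(A, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh-quotient estimate from the converged vector
    w = mat_vec(A, v)
    lam = sum(wi * vi for wi, vi in zip(w, v)) / sum(vi * vi for vi in v)
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 3 and 1
lam1, v1 = power_iteration(A, [1.0, 0.3])
# Deflation: subtract lam1 * v1 v1^T / (v1 . v1) to remove the first mode
n = sum(x * x for x in v1)
B = [[A[i][j] - lam1 * v1[i] * v1[j] / n for j in range(2)] for i in range(2)]
lam2, _ = power_iteration(B, [1.0, -0.3])
print(round(lam1, 6), round(lam2, 6))
```

In practice, the Arnoldi variant discussed in the thesis would be preferred for large sparse systems; this toy deflation only shows why higher modes need the fundamental mode removed first.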
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons that the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and are still mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating methodologies: a meta-model specifies the concepts, rules and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its spatial structure, whether logical or physical). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
Mycotoxins are contaminants of agricultural products both in the field and during storage and can enter the food chain through contaminated cereals and through foods (milk, meat, and eggs) obtained from animals fed mycotoxin-contaminated feeds. Mycotoxins are genotoxic carcinogens that cause health and economic problems. Ochratoxin A and fumonisin B1 were classified by the International Agency for Research on Cancer in 1993 as “possibly carcinogenic to humans” (class 2B). To control mycotoxin-induced damage, different strategies have been developed to reduce the growth of mycotoxigenic fungi as well as to decontaminate and/or detoxify mycotoxin-contaminated foods and animal feeds. The critical points targeted by these strategies are: prevention of mycotoxin contamination, detoxification of mycotoxins already present in food and feed, inhibition of mycotoxin absorption in the gastrointestinal tract, and reduction of mycotoxin-induced damage when absorption occurs. Decontamination processes, as indicated by FAO, must meet the following requisites to reduce the toxic and economic impact of mycotoxins: they must destroy, inactivate, or remove mycotoxins; they must not produce or leave toxic and/or carcinogenic/mutagenic residues in the final products or in food products obtained from animals fed decontaminated feed; they must be capable of destroying fungal spores and mycelium in order to avoid mycotoxin formation under favorable conditions; they should not adversely affect desirable physical and sensory properties of the feedstuff; and they have to be technically and economically feasible. One important approach to the prevention of mycotoxicosis in livestock is the addition to the diet of non-nutritional adsorbents that bind mycotoxins, preventing their absorption in the gastrointestinal tract. Activated carbons, hydrated sodium calcium aluminosilicate (HSCAS), zeolites, bentonites, and certain clays are the most studied adsorbents, and they possess a high affinity for mycotoxins.
In recent years, there has been increasing interest in the hypothesis that the absorption of mycotoxins from consumed food can be inhibited by microorganisms in the gastrointestinal tract. Numerous investigators have shown that some dairy strains of LAB and bifidobacteria are able to bind aflatoxins effectively. There is a strong need for prevention of mycotoxin-induced damage once the toxin is ingested. Nutritional approaches, such as supplementation with nutrients, food components, or additives with protective effects against mycotoxin toxicity, are attracting increasing interest. Since mycotoxins are known to produce damage by increasing oxidative stress, the protective properties of antioxidant substances have been extensively investigated. The purpose of the present study was to investigate, in vitro and in vivo, strategies to counteract the mycotoxin threat, particularly in swine husbandry. In the present study, the Ussing chamber technique was applied for the first time to investigate in vitro the permeability of OTA and FB1 through rat intestinal mucosa. Results showed that OTA and FB1 were not absorbed through rat small intestinal mucosa. Since in vivo absorption of both mycotoxins normally occurs, it is evident that under these experimental conditions Ussing diffusion chambers were not able to assess the intestinal permeability of OTA and FB1. A large number of LAB strains isolated from feces and from different gastrointestinal tract regions of pigs and poultry were screened for their ability to remove OTA, FB1, and DON from bacterial medium. The results of this in vitro study showed low efficacy of the isolated LAB strains in removing OTA, FB1, and DON from bacterial medium. An in vivo trial in rats was performed to evaluate the effects of in-feed supplementation of a LAB strain, Pediococcus pentosaceus FBB61, in counteracting the toxic effects induced by exposure to OTA-contaminated diets. The study allows the conclusion that feed supplementation with P. pentosaceus FBB61 improves the oxidative status in the liver and lowers OTA-induced oxidative damage in liver and kidney when the diet is contaminated with OTA. This feature of P. pentosaceus FBB61, together with its bactericidal activity against Gram-positive bacteria and its ability to modulate the gut microflora balance in pigs, encourages additional in vivo experiments to better understand the potential role of P. pentosaceus FBB61 as a probiotic for farm animals and humans. In the present study, an in vivo trial on weaned piglets fed FB1 allows the conclusion that feeding 7.32 ppm of FB1 for 6 weeks did not impair growth performance. Deoxynivalenol contamination of feeds was evaluated in an in vivo trial on weaned piglets. The comparison between the growth parameters of piglets fed a DON-contaminated diet and those fed the contaminated diet supplemented with a commercial product did not reach significance, but piglet growth performance was numerically improved when the commercial product was added to the DON-contaminated diet. Further studies are needed to improve knowledge on mycotoxin intestinal absorption, mechanisms for their detoxification in feeds and foods, and nutritional strategies to reduce mycotoxin-induced damage in animals and humans. A multifactorial approach acting on each of the various steps could be a promising strategy to counteract mycotoxin damage.
Abstract:
The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows in oceans, rivers, and the atmosphere. Mass and momentum are conserved. We distinguish two characteristic speeds: the advection speed, i.e., the speed of mass transport, and the gravity-wave speed, i.e., the speed of the surface waves that carry energy and momentum. The Froude number is a dimensionless number given by the ratio of the reference advection speed to the reference gravity-wave speed. For the applications mentioned above it is typically very small, e.g., 0.01. Time-explicit finite volume methods are most commonly used for the numerical solution of hyperbolic balance laws. The CFL stability condition must then be satisfied, and the time increment is approximately proportional to the Froude number. Hence, for small Froude numbers, say below 0.2, the computational cost becomes high. Moreover, the numerical solutions are dissipative. It is well known that the solutions of the SWE converge to the solutions of the lake equations / zero-Froude-number SWE as the Froude number tends to zero, provided adequate conditions are satisfied. In this limit process the equations change type from hyperbolic to hyperbolic-elliptic. Furthermore, at small Froude numbers the order of convergence may drop or the numerical method may break down. In particular, incorrect asymptotic behavior (with respect to the Froude number) has been observed for time-explicit methods, which could cause these effects. Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. We want numerical methods for balance laws to preserve certain equilibrium states exactly; otherwise, artificial flows can be generated by the method.
Therefore the approximation of the source term is essential. Numerical methods that preserve equilibrium states are called well-balanced. In the present work, we split the SWE into a stiff, linear part and a non-stiff part in order to circumvent the severe time-step restriction imposed by the CFL condition. The stiff part is approximated implicitly and the non-stiff part explicitly. To this end we use IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization is carried out with the finite volume method. The stiff part is approximated using finite differences or in a genuinely multidimensional manner. For the multidimensional approximation we use approximate evolution operators that take all infinitely many directions of information propagation into account. The explicit terms are approximated with standard numerical fluxes. We thus obtain a stability condition analogous to that of a purely advective flow, i.e., the time increment grows by a factor of the reciprocal of the Froude number. The methods derived in this work are asymptotic preserving and well-balanced. The asymptotic-preserving property ensures that the numerical solution has the "correct" asymptotic behavior with respect to small Froude numbers. We present first- and second-order methods. Numerical results confirm the order of convergence as well as stability, well-balancedness, and the asymptotic-preserving property. In particular, for some methods we observe that the order of convergence is almost independent of the Froude number.
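The stiff/non-stiff IMEX splitting can be sketched on a scalar model problem: the stiff linear term is treated with backward Euler and the non-stiff forcing with forward Euler, allowing time steps far beyond the explicit stability limit. The equation and step size below are illustrative choices, not taken from the thesis.

```python
# Toy sketch of the IMEX idea: split du/dt into a stiff linear part and a
# non-stiff part; treat the stiff term implicitly (backward Euler) and the
# rest explicitly (forward Euler). Model problem (illustrative only):
#   du/dt = -k*u + sin(t),  k = 50.

import math

def imex_euler(u0, dt, steps, k=50.0):
    u, t = u0, 0.0
    for _ in range(steps):
        # (u_new - u)/dt = -k*u_new + sin(t)  =>  solve for u_new
        u = (u + dt * math.sin(t)) / (1.0 + dt * k)
        t += dt
    return u

# A time step far larger than the explicit stability limit dt < 2/k = 0.04
u = imex_euler(u0=1.0, dt=0.1, steps=100)
print(abs(u) < 1.0)  # solution stays bounded despite the large step
```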
Abstract:
OBJECTIVES: This study evaluated the initial and the artificially aged push-out bond strength between ceramic and dentin produced by one of five resin cements. METHODS: Two hundred direct ceramic restorations (IPS Empress CAD) were luted to standardized Class I cavities in extracted human molars using one of four self-adhesive cements (SpeedCEM, RelyX Unicem Aplicap, SmartCem2 and iCEM) or a reference etch-and-rinse resin cement (Syntac/Variolink II) (n=40/cement). Push-out bond strength (PBS) was measured (1) after 24h of water storage (non-aged group; n=20/cement) or (2) after artificial ageing with 5000 thermal cycles followed by 6 months of humid storage (aged group; n=20/cement). Nonparametric ANOVA and pairwise Wilcoxon rank-sum tests with Bonferroni-Holm adjustment were applied for statistical analysis. The significance level was set at alpha=0.05. In addition, failure mode and fracture pattern were analyzed by stereomicroscope and scanning electron microscopy. RESULTS: Whereas no statistically significant effect of storage condition was found (p=0.441), there was a significant effect of resin cement (p<0.0001): RelyX Unicem showed significantly higher PBS than the other cements. Syntac/Variolink II showed significantly higher PBS than SmartCem2 (p<0.001). No significant differences were found between SpeedCEM, SmartCem2, and iCEM. The predominant failure mode was adhesive failure of the cements at the dentin interface, except for RelyX Unicem, which in most cases showed cohesive failure in the ceramic. SIGNIFICANCE: The resin cements showed marked differences in push-out bond strength when used for luting ceramic restorations to dentin. Variolink II with the etch-and-rinse adhesive Syntac did not perform better than three of the four self-adhesive resin cements tested.
Abstract:
Background: In protein sequence classification, identifying the sequence motifs or n-grams that can precisely discriminate between classes is a more interesting scientific question than the classification itself. A number of classification methods aim at accurate classification but fail to explain which sequence features actually contribute to the accuracy. We hypothesize that sequences in lower denominations (n-grams) can be used to explore the sequence landscape and to identify class-specific motifs that discriminate between classes during classification. Discriminative n-grams are short peptide sequences that are highly frequent in one class but are either minimally present or absent in other classes. In this study, we present a new substitution-based scoring function for identifying discriminative n-grams that are highly specific to a class. Results: We present a scoring function based on discriminative n-grams that can effectively discriminate between classes. The scoring function initially harvests the entire set of 4- to 8-grams from the protein sequences of the different classes in the dataset. Similar n-grams of the same size are combined to form new n-grams, where similarity is defined by positive amino acid substitution scores in the BLOSUM62 matrix. Substitution has resulted in a large increase in the number of discriminative n-grams harvested. Due to the unbalanced nature of the dataset, the frequencies of the n-grams are normalized using a dampening factor, which gives more weight to the n-grams that appear in fewer classes and vice versa. After the n-grams are normalized, the scoring function identifies discriminative 4- to 8-grams for each class that are frequent enough to be above a selection threshold. By mapping these discriminative n-grams back to the protein sequences, we obtained contiguous n-grams that represent short class-specific motifs in protein sequences.
Our method fared well compared to an existing motif-finding method known as Wordspy. We have validated our enriched set of class-specific motifs against the functionally important motifs obtained from the NLSdb, Prosite and ELM databases. We demonstrate that this method is very generic and thus can be widely applied to detect class-specific motifs in many protein sequence classification tasks. Conclusion: The proposed scoring function and methodology are able to identify class-specific motifs using discriminative n-grams derived from the protein sequences. The implementation of amino acid substitution scores for similarity detection, and the dampening factor used to normalize the unbalanced datasets, have a significant effect on the performance of the scoring function. Our multipronged validation tests demonstrate that this method can detect class-specific motifs from a wide variety of protein sequence classes, with a potential application to detecting proteome-specific motifs of different organisms.
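The core harvesting-and-scoring idea can be sketched in a few lines: count n-grams per class and keep those frequent in one class and absent from the others. This toy version deliberately omits the paper's BLOSUM62-based merging of similar n-grams and the dampening-factor normalization; the sequences and the simple count-based criterion are invented for illustration.

```python
# Sketch of discriminative n-gram harvesting: count n-grams per class and
# keep those frequent in one class but absent elsewhere. The toy sequences
# and the simple count criterion are illustrative; the full method also
# merges BLOSUM62-similar n-grams and normalizes with a dampening factor.

from collections import Counter

def harvest(seqs, n):
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - n + 1):
            counts[s[i:i + n]] += 1
    return counts

def discriminative(class_seqs, n=3, min_count=2):
    per_class = {c: harvest(seqs, n) for c, seqs in class_seqs.items()}
    result = {}
    for c, counts in per_class.items():
        others = Counter()
        for c2, counts2 in per_class.items():
            if c2 != c:
                others.update(counts2)
        result[c] = sorted(g for g, k in counts.items()
                           if k >= min_count and others[g] == 0)
    return result

classes = {
    "nuclear":  ["MKKRKVAA", "GGKKRKVT"],   # toy sequences only
    "membrane": ["MLLILLLA", "GLLILLVT"],
}
print(discriminative(classes))
```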
Abstract:
Long-term follow-up of patients with total hip arthroplasty (THA) revealed a marked deterioration of walking capacity in Charnley class B after postoperative year 4. We hypothesized that a specific group of patients, namely those with unilateral hip arthroplasty and an untreated but affected contralateral hip, was responsible for this observation. Therefore, we conducted a study taking into consideration the two subclasses that make up Charnley class B: patients with unilateral THA and contralateral hip disease, and patients with bilateral THA. A sample of 15,160 patients with 35,773 follow-ups prospectively collected over 10 years was evaluated. The sample was categorized into four classes according to a new, modified Charnley classification. Annual analyses of the proportion of patients able to ambulate for longer than 60 min were conducted. The traditionally labeled Charnley class B consists of two very different patient groups with respect to their walking capacities. Those with unilateral THA and contralateral hip disease have below-average walking capacity and a deterioration of ambulation beginning 3 to 4 years after surgery. Those with bilateral THA have stable, above-average walking capacity similar to Charnley class A. An extension of the traditional Charnley classification is proposed, taking into account the two different patient groups in Charnley class B. The new fourth Charnley class consists of patients with bilateral THA and was labeled BB in order to express the presence of two artificial hip joints and to preserve the traditional classification A through C.
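The proposed split of class B can be written as a small classification rule. The predicate names below are hypothetical, chosen only to mirror the class definitions in the abstract; real Charnley grading involves clinical judgment beyond these flags.

```python
# Sketch of the modified Charnley classification described in the abstract:
# class B is split into B (unilateral THA with contralateral hip disease)
# and BB (bilateral THA). The boolean predicates are hypothetical stand-ins
# for the clinical assessment.

def charnley_class(tha_hips, contralateral_disease, other_limiting_disease):
    """tha_hips: number of replaced hips (1 or 2)."""
    if other_limiting_disease:      # systemic/other disease limiting walking
        return "C"
    if tha_hips == 2:               # bilateral THA -> new fourth class
        return "BB"
    if contralateral_disease:       # unilateral THA, affected other hip
        return "B"
    return "A"

print(charnley_class(1, False, False))  # unilateral THA, healthy other hip
print(charnley_class(2, False, False))  # bilateral THA
```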
Abstract:
The aim of this paper is to evaluate the diagnostic contribution of various types of texture features to the discrimination of hepatic tissue in abdominal non-enhanced Computed Tomography (CT) images. Regions of Interest (ROIs) corresponding to the classes normal liver, cyst, hemangioma, and hepatocellular carcinoma were drawn by an experienced radiologist. For each ROI, five distinct sets of texture features were extracted using First Order Statistics (FOS), the Spatial Gray Level Dependence Matrix (SGLDM), the Gray Level Difference Method (GLDM), Laws' Texture Energy Measures (TEM), and Fractal Dimension Measurements (FDM). In order to evaluate the ability of the texture features to discriminate the various types of hepatic tissue, each set of texture features, or its reduced version after genetic-algorithm-based feature selection, was fed to a feed-forward Neural Network (NN) classifier. For each NN, the area under the Receiver Operating Characteristic (ROC) curve (Az) was calculated for all one-vs-all discriminations of hepatic tissue. Additionally, the total Az for the multi-class discrimination task was estimated. The results show that features derived from FOS perform better than the other texture features (total Az: 0.802+/-0.083) in the discrimination of hepatic tissue.
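FOS features are computed directly from the ROI's gray-level histogram, which the following sketch illustrates with mean, variance, skewness and entropy. The pixel values are made up, and this particular feature subset is an assumption; the paper does not list exactly which first-order statistics it used.

```python
# Sketch of First Order Statistics (FOS) features from an ROI's gray-level
# histogram: mean, variance, skewness and entropy, the kind of features fed
# to a neural-network classifier. The pixel values below are invented.

import math

def fos_features(pixels, levels=8):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    skew = 0.0 if var == 0 else (sum((p - mean) ** 3 for p in pixels) / n) / var ** 1.5
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    entropy = -sum((h / n) * math.log2(h / n) for h in hist if h)
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}

roi = [1, 2, 2, 3, 3, 3, 4, 4, 2, 3]   # illustrative gray levels (0..7)
feats = fos_features(roi)
print(round(feats["mean"], 2), round(feats["entropy"], 3))
```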
Abstract:
The elements of Lean Thinking provide modern production with substantial principles and methods for designing systems that are both effective and efficient. The approaches of lean logistics form a supporting element here. Line-oriented, high-variant large-series production in the automotive industry in particular is a major driver of this development. The ongoing adaptation of these approaches to multi-stage production systems, as found especially in printing press manufacturing, appears consistent and sensible. This article highlights essential prerequisites for successful implementation and describes the respective interdependencies. Finally, selected methods are quantitatively evaluated by means of an indicator-based measurement model using a case study from printing press manufacturing.
Abstract:
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take into account the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning for both discrete classification and continuous regression tasks. The suggested learning rules also speed up learning with increasing population size, as opposed to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
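As a minimal illustration of the spike-count code mentioned above, the following sketch reads a decision out of a population: each candidate action has a subpopulation, and the chosen action is the one whose subpopulation fired the most spikes. The spike trains are invented, and this is only the readout side, not the paper's plasticity rules.

```python
# Toy readout of a spike-count population code: each candidate action has a
# subpopulation of neurons, and the decision is the action whose
# subpopulation fired the most spikes in the decision window.
# The spike trains (lists of spike times in seconds) are made up.

def spike_count_decision(spike_trains_by_action):
    counts = {action: sum(len(train) for train in trains)
              for action, trains in spike_trains_by_action.items()}
    winner = max(counts, key=counts.get)
    return winner, counts

trains = {
    "left":  [[0.01, 0.05], [0.02], []],            # 3 spikes total
    "right": [[0.01, 0.03, 0.07], [0.02, 0.04]],    # 5 spikes total
    "stay":  [[0.06], []],                          # 1 spike total
}
action, counts = spike_count_decision(trains)
print(action, counts["right"])
```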