97 results for Two-dimensional cutting stocks
Abstract:
Three-dimensional information is much easier to understand than a set of two-dimensional images. A layman is therefore thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM), whereas a transmission electron micrograph challenges the imagination. Early approaches to gaining insight into the third dimension were to cut serial microtome sections of a region of interest (ROI) and then to build a model of the object. Serial microtome sectioning is tedious, skill-demanding work and was therefore seldom done. In the last two decades, with the increase in computer power, sophisticated display options, and the development of new instruments (an SEM with a built-in microtome as well as a focused ion beam scanning electron microscope, FIB-SEM), serial sectioning and 3D analysis have become far easier and faster. Owing to the relief-like topography of the microtome-trimmed block face of resin-embedded tissue, the ROI can be located in secondary electron mode, and at the selected spot the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume has been imaged. Because FIB-SEM allows 3D imaging of biological fine structure at high resolution only for small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows an ROI to be selected for site-specific FIB-SEM tomography simply on the basis of its pronounced topographic features.
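To make the acquisition loop concrete, here is a minimal pseudocode-style sketch in Python of the "slice and view" cycle described above; mill_slice and acquire_bse_image are hypothetical placeholders for whatever instrument-control functions a given FIB-SEM setup exposes, not a real vendor API.

import numpy as np

def slice_and_view(n_slices, thickness_nm, mill_slice, acquire_bse_image):
    # Repeatedly remove a thin slice with the ion beam and image the freshly
    # exposed block face with the electron beam (backscattered electrons),
    # until the desired volume has been covered.
    stack = []
    for _ in range(n_slices):
        mill_slice(thickness_nm)           # ion-beam milling step (placeholder callable)
        stack.append(acquire_bse_image())  # BSE image of the new face (placeholder callable)
    # Stacking the images gives the raw volume for 3D reconstruction, provided
    # all images share the same field of view and pixel size.
    return np.stack(stack)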
Abstract:
The infinite slope method is widely used as the geotechnical component of geomorphic and landscape evolution models. Its assumption that shallow landslides are infinitely long (in a downslope direction) is usually considered valid for natural landslides on the basis that they are generally long relative to their depth. However, this is rarely justified, because the critical length/depth (L/H) ratio below which edge effects become important is unknown. We establish this critical L/H ratio by benchmarking infinite slope stability predictions against finite element predictions for a set of synthetic two-dimensional slopes, assuming that the difference between the predictions is due to error in the infinite slope method. We test the infinite slope method for six different L/H ratios to find the critical ratio at which its predictions fall within 5% of those from the finite element method. We repeat these tests for 5000 synthetic slopes with a range of failure plane depths, pore water pressures, friction angles, soil cohesions, soil unit weights and slope angles characteristic of natural slopes. We find that: (1) infinite slope stability predictions are consistently too conservative for small L/H ratios; (2) the predictions always converge to within 5% of the finite element benchmarks by an L/H ratio of 25 (i.e. the infinite slope assumption is reasonable for landslides 25 times longer than they are deep); but (3) they can converge at much lower ratios depending on slope properties, particularly for low-cohesion soils. The implication for catchment-scale stability models is that the infinite length assumption is reasonable if their grid resolution is coarse (e.g. >25 m). However, it may also be valid even at much finer grid resolutions (e.g. 1 m), because spatial organization in the predicted pore water pressure field reduces the probability of short landslides and minimizes the risk that predicted landslides will have L/H ratios less than 25. Copyright (c) 2012 John Wiley & Sons, Ltd.
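As a reference for the quantity being benchmarked, the sketch below (Python) implements the textbook infinite-slope factor of safety and the 5% convergence criterion described above; the function names and parameter values are illustrative, not the authors' code.

import numpy as np

def infinite_slope_fs(c, phi_deg, gamma, H, beta_deg, u=0.0):
    # Classical infinite-slope factor of safety for a planar slip surface at
    # depth H below a slope of angle beta:
    #   FS = [c' + (gamma*H*cos^2(beta) - u) * tan(phi')] / [gamma*H*sin(beta)*cos(beta)]
    # c: effective cohesion [kPa], phi_deg: friction angle [deg],
    # gamma: unit weight [kN/m^3], H: failure plane depth [m], u: pore pressure [kPa].
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    resisting = c + (gamma * H * np.cos(beta) ** 2 - u) * np.tan(phi)
    driving = gamma * H * np.sin(beta) * np.cos(beta)
    return resisting / driving

def within_5_percent(fs_infinite, fs_finite_element, tol=0.05):
    # Convergence test of the kind used in the benchmarking: accept the
    # infinite-slope prediction once it lies within 5% of the finite element value.
    return abs(fs_infinite - fs_finite_element) / fs_finite_element <= tol

# Example with made-up slope properties:
print(infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, H=2.0, beta_deg=30.0, u=5.0))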
Abstract:
The paracaspase MALT1 is pivotal in antigen receptor-mediated lymphocyte activation and lymphomagenesis. MALT1 contains a caspase-like domain, but it is unknown whether this domain is proteolytically active. Here we report that MALT1 had arginine-directed proteolytic activity that was activated after T cell stimulation, and we identify the signaling protein Bcl-10 as a MALT1 substrate. Processing of Bcl-10 after Arg228 was required for T cell receptor-induced cell adhesion to fibronectin. In contrast, MALT1 activity but not Bcl-10 cleavage was essential for optimal activation of the transcription factor NF-kappaB and production of interleukin 2. Thus, the proteolytic activity of MALT1 is central to T cell activation, which suggests MALT1 as a possible target for the development of immunomodulatory or anticancer drugs.
Abstract:
Proteomics has come a long way from the initial qualitative analysis of proteins present in a given sample at a given time ("cataloguing") to large-scale characterization of proteomes, their interactions and their dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and the most recent instrumentation news are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another special chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is well known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. I then investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed light on the mechanisms responsible for maintaining it. I show that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics used and on how individual payoffs are calculated.
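For readers unfamiliar with the Nowak and May (1992) result invoked above, here is a minimal, self-contained sketch of a spatial Prisoner's Dilemma on a two-dimensional grid (Python/NumPy). The lattice size, the temptation parameter b and the imitate-the-best update rule are illustrative assumptions, not the simulations reported in the thesis.

import numpy as np

rng = np.random.default_rng(0)
L, b, steps = 50, 1.8, 50            # lattice size, temptation to defect, rounds (assumed values)
strat = rng.integers(0, 2, (L, L))   # 1 = cooperate, 0 = defect

def neighbour_shifts():
    return [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy]

def rolled(a, dx, dy):
    # View of array a shifted by (dx, dy) with periodic (toroidal) boundaries.
    return np.roll(np.roll(a, dx, axis=0), dy, axis=1)

for _ in range(steps):
    coop_neigh = sum(rolled(strat, dx, dy) for dx, dy in neighbour_shifts())
    # Weak Prisoner's Dilemma payoffs (R=1, T=b, S=P=0): a cooperator earns 1 per
    # cooperating neighbour, a defector earns b per cooperating neighbour.
    payoff = np.where(strat == 1, coop_neigh, b * coop_neigh)
    # Unconditional imitation: each site adopts the strategy of its best-scoring
    # neighbour, keeping its own if it does at least as well.
    best_payoff, best_strat = payoff.copy(), strat.copy()
    for dx, dy in neighbour_shifts():
        p, s = rolled(payoff, dx, dy), rolled(strat, dx, dy)
        better = p > best_payoff
        best_payoff = np.where(better, p, best_payoff)
        best_strat = np.where(better, s, best_strat)
    strat = best_strat

print("final fraction of cooperators:", strat.mean())

On a lattice like this, clusters of cooperators can persist for moderate values of b, which is the qualitative point the thesis builds on when moving from regular grids to social network topologies.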
Abstract:
This study focused mainly on changes in the microtubule cytoskeleton in a transgenic mouse where beta-galactosidase fused to a truncated neurofilament subunit led to a decrease in neurofilament triplet protein expression, a loss of neurofilament assembly and abolished transport into neuronal processes in spinal cord and brain. Although all neurofilament subunits accumulated in neuronal cell bodies, our data suggest an increased solubility of all three subunits, rather than increased precipitation, and point to a perturbed filament assembly. In addition, reduced neurofilament phosphorylation may favor increased filament degradation. The function of microtubules seemed largely unaffected, in that the expression and distribution of tubulin and microtubule-associated proteins (MAPs) were largely unchanged in transgenic animals. MAP1A was the only MAP with a reduced signal in spinal cord tissue, and differences in immunostaining in various brain regions corroborate a relationship between MAP1A and neurofilaments.
Abstract:
The trabecular bone score (TBS) is a gray-level textural metric that can be extracted from the two-dimensional lumbar spine dual-energy X-ray absorptiometry (DXA) image. TBS is related to bone microarchitecture and provides skeletal information that is not captured by the standard bone mineral density (BMD) measurement. Based on experimental variograms of the projected DXA image, TBS has the potential to discern differences between DXA scans that show similar BMD measurements. An elevated TBS value correlates with better skeletal microstructure; a low TBS value correlates with weaker skeletal microstructure. Lumbar spine TBS has been evaluated in cross-sectional and longitudinal studies. The following conclusions are based upon the publications reviewed in this article: 1) TBS gives lower values in postmenopausal women and in men with previous fragility fractures than in their nonfractured counterparts; 2) TBS is complementary to data available from lumbar spine DXA measurements; 3) TBS results are lower in women who have sustained a fragility fracture but in whom DXA does not indicate osteoporosis or even osteopenia; 4) TBS predicts fracture risk as well as lumbar spine BMD measurements do in postmenopausal women; 5) efficacious therapies for osteoporosis differ in the extent to which they influence the TBS; 6) TBS is associated with fracture risk in individuals with conditions related to reduced bone mass or bone quality. Based on these data, lumbar spine TBS holds promise as an emerging technology that could well become a valuable clinical tool in the diagnosis of osteoporosis and in fracture risk assessment.
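Since TBS is derived from experimental variograms of the projected DXA image, the following Python sketch shows how an isotropic experimental variogram of a two-dimensional gray-level image can be computed; it is a didactic illustration of the underlying statistic, not the proprietary TBS algorithm.

import numpy as np

def experimental_variogram(img, max_lag=20):
    # Semivariance gamma(h) = 0.5 * mean squared gray-level difference between
    # pixel pairs separated by lag h (here along rows and columns only).
    img = np.asarray(img, dtype=float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.empty(max_lag)
    for i, h in enumerate(lags):
        dx = (img[:, h:] - img[:, :-h]) ** 2
        dy = (img[h:, :] - img[:-h, :]) ** 2
        gamma[i] = 0.5 * np.mean(np.concatenate([dx.ravel(), dy.ravel()]))
    return lags, gamma

# A texture whose semivariance rises slowly with lag is more homogeneous;
# TBS summarizes this kind of lag-dependent behaviour into a single index.
lags, gamma = experimental_variogram(np.random.default_rng(0).random((64, 64)))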
Abstract:
To study the structure of partially replicated plasmids, we cloned the Escherichia coli polar replication terminator TerE in its active orientation at different locations in the ColE1 vector pBR18. The resulting plasmids, pBR18-TerE@StyI and pBR18-TerE@EcoRI, were analyzed by neutral/neutral two-dimensional agarose gel electrophoresis and electron microscopy. Replication forks stop at the Ter-TUS complex, leading to the accumulation of specific replication intermediates with a mass 1.26 times the mass of non-replicating plasmids for pBR18-TerE@StyI and 1.57 times for pBR18-TerE@EcoRI. The number of knotted bubbles detected after digestion with ScaI and the number and electrophoretic mobility of undigested partially replicated topoisomers reflect the changes in plasmid topology that occur in DNA molecules replicated to different extents. Exposure to increasing concentrations of chloroquine or ethidium bromide revealed that partially replicated topoisomers (CCCRIs) do not sustain positive supercoiling as efficiently as their non-replicating counterparts. It was suggested that this occurs because in partially replicated plasmids a positive DeltaLk is absorbed by regression of the replication fork. Indeed, we showed by electron microscopy that, at least in the presence of chloroquine, some of the CCCRIs of pBR18-TerE@StyI formed Holliday-like junction structures characteristic of reversed forks. However, not all the positive supercoiling was absorbed by fork reversal in the presence of high concentrations of ethidium bromide.
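As a back-of-the-envelope reading of the mass ratios quoted above (assuming the mass of a replication intermediate scales linearly with the fraction f of the plasmid that has been replicated when the forks stall at Ter-TUS):

\[
M_{\mathrm{RI}} = (1 + f)\,M_{0}
\quad\Rightarrow\quad
f_{\mathrm{TerE@StyI}} = 1.26 - 1 = 0.26,
\qquad
f_{\mathrm{TerE@EcoRI}} = 1.57 - 1 = 0.57 .
\]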
Abstract:
Understanding the genetic structure of human populations is of fundamental interest to medical, forensic and anthropological sciences. Advances in high-throughput genotyping technology have markedly improved our understanding of global patterns of human genetic variation and suggest the potential to use large samples to uncover variation among closely spaced populations. Here we characterize genetic variation in a sample of 3,000 European individuals genotyped at over half a million variable DNA sites in the human genome. Despite low average levels of genetic differentiation among Europeans, we find a close correspondence between genetic and geographic distances; indeed, a geographical map of Europe arises naturally as an efficient two-dimensional summary of genetic variation in Europeans. The results emphasize that when mapping the genetic basis of a disease phenotype, spurious associations can arise if genetic structure is not properly accounted for. In addition, the results are relevant to the prospects of genetic ancestry testing; an individual's DNA can be used to infer their geographic origin with surprising accuracy, often to within a few hundred kilometres.
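The "efficient two-dimensional summary" described above is the kind of result produced by principal component analysis of a genotype matrix; a minimal Python/NumPy sketch follows, with random placeholder data standing in for the actual 3,000-individual, half-million-SNP dataset.

import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(300, 5000)).astype(float)  # genotypes coded 0/1/2 (placeholder data)

# Centre each SNP, then take the top two principal components via SVD.
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :2] * S[:2]            # coordinates of each individual on PC1 and PC2
print(pcs.shape)                  # (300, 2)

# For real data, plotting pcs[:, 0] against pcs[:, 1] is what recovers a
# map-like arrangement when genetic structure follows geography.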
Abstract:
We present strategies for chemical shift assignments of large proteins by magic-angle spinning solid-state NMR, using the 21-kDa disulfide-bond-forming enzyme DsbA as prototype. Previous studies have demonstrated that complete de novo assignments are possible for proteins up to approximately 17 kDa, and partial assignments have been performed for several larger proteins. Here we show that combinations of isotopic labeling strategies, high field correlation spectroscopy, and three-dimensional (3D) and four-dimensional (4D) backbone correlation experiments yield highly confident assignments for more than 90% of backbone resonances in DsbA. Samples were prepared as nanocrystalline precipitates by a dialysis procedure, resulting in heterogeneous linewidths below 0.2 ppm. Thus, high magnetic fields, selective decoupling pulse sequences, and sparse isotopic labeling all improved spectral resolution. Assignments by amino acid type were facilitated by particular combinations of pulse sequences and isotopic labeling; for example, transferred echo double resonance experiments enhanced sensitivity for Pro and Gly residues; [2-(13)C]glycerol labeling clarified Val, Ile, and Leu assignments; in-phase anti-phase correlation spectra enabled interpretation of otherwise crowded Glx/Asx side-chain regions; and 3D NCACX experiments on [2-(13)C]glycerol samples provided unique sets of aromatic (Phe, Tyr, and Trp) correlations. Together with high-sensitivity CANCOCA 4D experiments and CANCOCX 3D experiments, unambiguous backbone walks could be performed throughout the majority of the sequence. At 189 residues, DsbA represents the largest monomeric unit for which essentially complete solid-state NMR assignments have so far been achieved. These results will facilitate studies of nanocrystalline DsbA structure and dynamics and will enable analysis of its 41-kDa covalent complex with the membrane protein DsbB, for which we demonstrate a high-resolution two-dimensional (13)C-(13)C spectrum.
Abstract:
Mutations in kerato-epithelin are responsible for a group of hereditary cornea-specific deposition diseases, the 5q31-linked corneal dystrophies. These conditions are characterized by progressive accumulation of protein deposits of different ultrastructure. Herein, we studied corneas with mutations at kerato-epithelin residue Arg-124 resulting in amyloid (R124C), non-amyloid (R124L), and a mixed pattern of deposition (R124H). We found that aggregated kerato-epithelin comprised all types of pathological deposits. Each mutation was associated with characteristic changes of protein turnover in corneal tissue. Amyloidogenesis in R124C corneas was accompanied by the accumulation of N-terminal kerato-epithelin fragments, whereby species of 44 kDa were the major constituents of amyloid fibrils. R124H corneas with prevailing non-amyloid inclusions showed accumulation of a new 66-kDa species together with the full-size 68-kDa form. Finally, in R124L corneas with non-amyloid deposits, we found only the accumulation of the 68-kDa form. Two-dimensional gels revealed mutation-specific changes in the processing of the full-size protein in all affected corneas. It appears that substitutions at the same residue (Arg-124) result in cornea-specific deposition of kerato-epithelin via distinct aggregation pathways, each involving altered turnover of the protein in corneal tissue.
Abstract:
An impaired glutathione (GSH) synthesis has been observed in several multifactorial diseases, including schizophrenia and myocardial infarction. Genetic studies revealed an association between schizophrenia and a GAG trinucleotide repeat (TNR) polymorphism in the catalytic subunit (GCLC) of the glutamate cysteine ligase (GCL). Disease-associated genotypes of this polymorphism correlated with a decrease in GCLC protein expression, GCL activity and GSH content. To clarify the consequences of a decreased GCL activity at the proteome level, three schizophrenia patients and three controls were selected based on the GCLC GAG TNR polymorphism. Fibroblast cultures were obtained by skin biopsy and were challenged with tert-butylhydroquinone (t-BHQ), a substance known to induce oxidative stress. Proteome changes were analyzed by two-dimensional gel electrophoresis (2-DE), and the results revealed 10 spots that were upregulated in patients following t-BHQ treatment, but not in controls. Nine corresponding proteins could be identified by MALDI mass spectrometry; these proteins are involved in various cellular functions, including energy metabolism, oxidative stress response, and cytoskeletal reorganization. In conclusion, skin fibroblasts of subjects with an impaired GSH synthesis showed an altered proteome reaction in response to oxidative stress. Furthermore, the study corroborates the use of fibroblasts as an additional means to study vulnerability factors of psychiatric diseases.
Abstract:
Two-dimensional agarose gel electrophoresis, psoralen cross-linking, and electron microscopy were used to study the effects of positive supercoiling on fork reversal in isolated replication intermediates of bacterial DNA plasmids. The results obtained demonstrate that the formation of Holliday-like junctions at both forks of a replication bubble creates a topological constraint that prevents further regression of the forks. We propose that this topological locking of replication intermediates provides a biological safety mechanism that protects DNA molecules against extensive fork reversals.
Abstract:
The Rational-Experiential Inventory (REI; Pacini and Epstein, 1999) is a self-administered test comprising two scales that measure respondents' attitudes towards two thinking styles, referred to respectively as the rational and the experiential thinking style. Two validation studies were conducted using a new French-language version of the REI. The first study confirms the validity of the French translation. The second study, which concerns the REI's construct validity, assesses the questionnaire's capacity to discriminate between a group of smokers and a group of non-smokers. Both studies give generally satisfactory results. In particular, they make quite clear the advantages of using the two-dimensional REI rather than the better-known Need For Cognition scale (Cacioppo & Petty, 1982).
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations that require quantification of grain sizes in a rock's microstructure. The primary data for grain size estimation are either 1D (i.e. line intercept methods), 2D (area analysis) or 3D (e.g. computed tomography, serial sectioning). These data have been subjected to different treatments over the years, and several studies assume a certain probability function (e.g. logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The resulting average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary measurements were obtained in different ways. In order to report an average grain size, we propose to use the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements in the case of unimodal grain size distributions. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data and directly presented 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes by stereological tools, depending on the type of CSD. (C) 2011 Published by Elsevier Ltd.
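As an illustration of the area-weighted average recommended above for unimodal 2D datasets, here is a short Python sketch; the use of equivalent circular diameters as the grain size measure is an assumption of this example, not a prescription from the paper.

import numpy as np

def area_weighted_mean_diameter(areas_um2):
    # Convert each sectional grain area to an equivalent circular diameter,
    # then weight each grain's diameter by its area.
    areas = np.asarray(areas_um2, dtype=float)
    d_eq = 2.0 * np.sqrt(areas / np.pi)
    return np.sum(areas * d_eq) / np.sum(areas)

# Example with grain areas in square micrometres:
print(area_weighted_mean_diameter([12.0, 55.0, 3.5, 120.0]))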