884 results for: network metabolismo flux analysis markov recon


Relevance:

30.00%

Publisher:

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general, optimization algorithm is proposed - called Relaxation Expectation Maximization (REM) - that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal, likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
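
For intuition about the kind of latent-variable fitting discussed above, the sketch below runs EM for a one-dimensional Gaussian mixture with a temperature-like relaxation parameter that is lowered in stages, a common generic device for escaping poor local likelihood maxima. It is not the REM algorithm of the thesis; the function name, annealing schedule, and all parameter choices are invented for illustration.

import numpy as np

def annealed_em_gmm(x, k=3, temps=(4.0, 2.0, 1.0), iters=50, seed=0):
    """Fit a 1-D Gaussian mixture by EM, lowering a temperature parameter.

    Responsibilities are computed from log-likelihoods divided by the current
    temperature; T = 1 recovers standard EM. This is only a generic
    illustration of relaxation-style EM, not the REM algorithm of the thesis.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, k)                 # initial means
    var = np.full(k, np.var(x))           # initial variances
    w = np.full(k, 1.0 / k)               # initial mixing weights
    for T in temps:
        for _ in range(iters):
            # E-step: tempered responsibilities r[i, j] = P(component j | x_i)
            logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                    - 0.5 * (x[:, None] - mu) ** 2 / var)
            logp /= T
            logp -= logp.max(axis=1, keepdims=True)
            r = np.exp(logp)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weighted maximum-likelihood updates
            nk = r.sum(axis=0) + 1e-12
            w = nk / n
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

# Toy usage: recover two well-separated clusters.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
print(annealed_em_gmm(data, k=2))

Lowering the temperature gradually sharpens the responsibilities from a nearly uniform soft assignment toward the usual EM posterior, which is the sense in which the fit is "relaxed" into a good basin.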

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
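
As a point of reference for the spike-sorting problem, a common simple baseline (far less powerful than the probabilistic treatment developed in the thesis, which also handles overlapping spikes and non-Gaussian variability) is to project detected spike waveforms onto a few principal components and cluster them with a Gaussian mixture. The sketch below, with invented array shapes and parameters, illustrates that baseline using scikit-learn.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def sort_spikes(waveforms, n_units=3, n_components=3, seed=0):
    """Baseline spike sorting: PCA feature extraction + GMM clustering.

    waveforms: array of shape (n_spikes, n_samples), one row per detected
    spike snippet. Returns an integer unit label per spike. This is a generic
    baseline, not the probabilistic model developed in the thesis.
    """
    features = PCA(n_components=n_components).fit_transform(waveforms)
    gmm = GaussianMixture(n_components=n_units, covariance_type="full",
                          random_state=seed)
    return gmm.fit_predict(features)

# Toy usage with synthetic snippets from two "neurons".
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
unit_a = -np.exp(-((t - 0.30) / 0.05) ** 2)          # narrow spike shape
unit_b = -0.6 * np.exp(-((t - 0.35) / 0.12) ** 2)    # broader, smaller spike
snips_a = [unit_a + 0.05 * rng.standard_normal(40) for _ in range(100)]
snips_b = [unit_b + 0.05 * rng.standard_normal(40) for _ in range(100)]
snippets = np.vstack(snips_a + snips_b)
labels = sort_spikes(snippets, n_units=2)
print(np.bincount(labels))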

Relevance:

30.00%

Publisher:

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, use questionable assumptions about the charging process, which result in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than a distribution of speeds. Particles are assumed to be perfect conductors carrying up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
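
As a minimal, purely illustrative example of the kind of "enhancement factor" comparison described above, the sketch below takes the classical free-molecular (kinetic-regime) collision flux to a sphere and compares it with the same flux evaluated using the finite collision radius (particle radius plus molecule radius). Attractive molecule-particle interactions, which this work treats in detail, would further modify the flux and are not modeled here; all numerical values are placeholders, not results from this work.

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def kinetic_flux(radius, n_vapor, m_molecule, temperature):
    """Free-molecular collision rate (molecules/s) onto a sphere of the given
    radius: (c_bar * n / 4) * surface area, with c_bar the mean thermal speed."""
    c_bar = np.sqrt(8 * K_B * temperature / (np.pi * m_molecule))
    return 0.25 * c_bar * n_vapor * 4 * np.pi * radius ** 2

# Illustrative numbers (not from this work): a 2 nm particle, a 0.3 nm vapor
# molecule of mass 100 amu, n = 1e14 m^-3, T = 300 K.
a_particle, r_molecule = 2e-9, 0.3e-9
n, m, T = 1e14, 100 * 1.66054e-27, 300.0

flux_point_molecule = kinetic_flux(a_particle, n, m, T)
flux_finite_size = kinetic_flux(a_particle + r_molecule, n, m, T)

# "Enhancement factor": ratio of the more complete flux to the classical one.
print(flux_finite_size / flux_point_molecule)

In this toy case the enhancement factor reduces to ((a + r_m)/a)^2, so finite molecular size alone raises the collision rate by roughly 30% for these illustrative sizes; interaction potentials would enter as an additional factor.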

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, \lambda_{R}, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
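
For orientation, earlier WGM sensor models are typically built on a first-order ("reactive") perturbation estimate relating the fractional resonance shift to the excess polarizability of the adsorbed molecule and the unperturbed mode field; a commonly cited form of that estimate is

\[
  \frac{\Delta\lambda_{R}}{\lambda_{R}} \;\approx\; \frac{\alpha_{\mathrm{ex}}\,\lvert \mathbf{E}_{0}(\mathbf{r}_{p}) \rvert^{2}}{2 \int \epsilon(\mathbf{r})\,\lvert \mathbf{E}_{0}(\mathbf{r}) \rvert^{2}\, dV},
\]

where \alpha_{\mathrm{ex}} is the excess polarizability, \mathbf{r}_{p} the binding site, and \epsilon(\mathbf{r}) the permittivity distribution of the resonator and its surroundings. It is quoted here only as background for the linear models whose assumptions this study examines, not as the model developed in this work.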

Relevance:

30.00%

Publisher:

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for the study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth, and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model, GCA, confirms that it is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.
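
As a small illustration of the delay-time parameterization underlying the tau method of Bessonova et al. (1976), the snippet below converts travel-time picks T(Δ) and ray-parameter estimates p = dT/dΔ into the delay time τ(p) = T - pΔ, the quantity that is inverted for a velocity-depth model. The numbers are placeholders, not data from this study.

import numpy as np

def tau_of_p(delta_deg, travel_time_s, p_s_per_deg):
    """Delay time tau(p) = T - p * Delta for each arrival.

    delta_deg      : epicentral distances (degrees)
    travel_time_s  : observed travel times (seconds)
    p_s_per_deg    : ray parameters dT/dDelta (s/deg), one per arrival
    """
    return (np.asarray(travel_time_s)
            - np.asarray(p_s_per_deg) * np.asarray(delta_deg))

# Placeholder picks (illustrative only). tau(p) is monotonically decreasing in
# p (d tau / d p = -Delta), and the tau(p) curve is what the inversion fits.
delta = np.array([10.0, 15.0, 20.0, 25.0])
times = np.array([150.0, 210.0, 265.0, 315.0])
slopes = np.array([12.5, 11.8, 10.9, 10.1])
print(tau_of_p(delta, times, slopes))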

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random and independently and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decays faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, the Markov chain model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network at that time. Convergence to the origin in this epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point, and we carry out a stability analysis of this second fixed point for both discrete-time and continuous-time models. Returning to the Markov chain model, we argue that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
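
A minimal sketch of the nonlinear mean-field map and its linear upper bound discussed above, for a generic discrete-time SIS-style model (the exact parameterization analyzed in the dissertation may differ; the adjacency matrix, rates, and function names here are illustrative):

import numpy as np

def epidemic_map(x, A, beta, delta):
    """One step of a discrete-time mean-field SIS map on a network.

    x     : vector of marginal infection probabilities, one per node
    A     : adjacency matrix of the contact network
    beta  : per-contact infection probability
    delta : recovery probability
    The update is x' = (1 - delta) * x + beta * (1 - x) * (A @ x),
    clipped to [0, 1] so the entries remain probabilities.
    """
    x_next = (1 - delta) * x + beta * (1 - x) * (A @ x)
    return np.clip(x_next, 0.0, 1.0)

def linear_bound_is_stable(A, beta, delta):
    """Stability of the linearization at the origin: the epidemic-free fixed
    point is globally attractive for the linear upper bound
    x' <= ((1 - delta) * I + beta * A) x  iff its spectral radius is < 1,
    i.e. beta * lambda_max(A) < delta for a symmetric adjacency matrix."""
    M = (1 - delta) * np.eye(A.shape[0]) + beta * A
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# Toy usage on a 4-node ring network.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
beta, delta = 0.1, 0.3
x = np.full(4, 0.5)
for _ in range(50):
    x = epidemic_map(x, A, beta, delta)
print(linear_bound_is_stable(A, beta, delta), x)  # stable bound -> x near 0

For this toy network beta * lambda_max(A) = 0.2 < delta = 0.3, so the linear upper bound is stable and the iterated map drives every marginal infection probability to zero, matching the extinction behavior described above.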

Relevance:

30.00%

Publisher:

Abstract:

Raman spectroscopy on single, living epithelial cells captured in a laser trap is shown to have diagnostic power for colorectal cancer. This new single-cell technology comprises three major components: primary culture processing of human tissue samples to produce single-cell suspensions, Raman detection on singly trapped cells, and diagnosis of the cells by artificial neural network classification. It is compared with DNA flow cytometry for similarities and differences, and its advantages over tissue Raman spectroscopy are also discussed. In the actual construction of a diagnostic model for colorectal cancer, real patient data were used to generate a training set of 320 Raman spectra and a test set of 80. By incorporating outlier corrections into a conventional binary neural classifier, our network achieved significantly better predictions than logistic regression, with sensitivity improved from 77.5% to 86.3% and specificity improved from 81.3% to 86.3% for the training set, and moderate improvements for the test set. Most importantly, the network approach enables a sensitivity map analysis to quantitate the relevance of each Raman band to the normal-to-cancer transformation at the cell level. Our technique has direct clinical applications for diagnosing cancers and basic-science potential in the study of the cell dynamics of carcinogenesis. (C) 2007 Society of Photo-Optical Instrumentation Engineers.
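
To make the reported figures of merit concrete, the snippet below shows how sensitivity and specificity are computed from binary predictions, together with a crude gradient-style per-band relevance score for a linear (logistic) classifier. The actual network, outlier-correction scheme, and sensitivity-map analysis used in the paper are more elaborate; everything here, including the example numbers, is illustrative.

import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def band_relevance(x, w, b):
    """For a logistic model p(cancer | spectrum x) = sigmoid(w . x + b),
    the derivative dp/dx_k = p * (1 - p) * w_k gives a per-band relevance
    score, a simple analogue of the sensitivity-map idea described above."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return p * (1 - p) * w

print(sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # ≈ (0.667, 1.0)
print(band_relevance(np.array([0.2, 0.5]), np.array([1.5, -2.0]), 0.1))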

Relevance:

30.00%

Publisher:

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and time-of-flight (TOF)-rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.

When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p=3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p=5.38(+3.48, -2.45) x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p=2.05(+1.79, -1.15) x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4) x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0) x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4) x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance:

30.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation we study these two areas, each comprising one part. For the first area we study so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study network codes constructed with finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, since they can violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and from finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
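
As a concrete illustration of the group-theoretic construction mentioned above, the sketch below builds the entropy vector associated with four subgroups of a finite group (h_S = log2(|G| / |∩_{i∈S} G_i|)) and checks the Ingleton inequality. The example group is a small abelian group, which necessarily satisfies Ingleton, so the snippet only illustrates the machinery; the subgroups and names are invented, and the Ingleton-violating (non-abelian) groups identified in the dissertation are not reproduced here.

import math
from itertools import combinations

def group_entropy_vector(group_order, subgroups):
    """Group-characterizable entropy vector: for each nonempty S ⊆ {1,...,4},
    h_S = log2(|G| / |intersection of G_i for i in S|).
    `subgroups` is a list of four subgroups, each given as a set of elements."""
    h = {}
    for r in range(1, 5):
        for S in combinations(range(4), r):
            inter = set(subgroups[S[0]])
            for i in S[1:]:
                inter &= set(subgroups[i])
            h[S] = math.log2(group_order / len(inter))
    return h

def satisfies_ingleton(h):
    """Ingleton inequality (with variables 1, 2 as the distinguished pair):
    h12 + h13 + h14 + h23 + h24 >= h1 + h2 + h34 + h123 + h124."""
    lhs = h[(0, 1)] + h[(0, 2)] + h[(0, 3)] + h[(1, 2)] + h[(1, 3)]
    rhs = h[(0,)] + h[(1,)] + h[(2, 3)] + h[(0, 1, 2)] + h[(0, 1, 3)]
    return lhs >= rhs - 1e-9

# Toy example: G = Z_12 under addition; abelian groups always satisfy Ingleton,
# so this merely illustrates the construction.
G_order = 12
G1 = {0, 2, 4, 6, 8, 10}   # subgroup of order 6
G2 = {0, 3, 6, 9}          # subgroup of order 4
G3 = {0, 4, 8}             # subgroup of order 3
G4 = {0, 6}                # subgroup of order 2
print(satisfies_ingleton(group_entropy_vector(G_order, [G1, G2, G3, G4])))  # True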

Relevance:

30.00%

Publisher:

Abstract:

Individuals who spend long periods in a wheelchair show substantial loss of bone mass, mainly in the lower limbs, possibly aggravated by low dietary calcium intake and inadequate vitamin D status. Physical exercise can contribute to maintaining or increasing bone mass in different populations, and in individuals with spinal cord injury it may help attenuate bone loss. The aim of the present study was to evaluate the influence of regular physical exercise on bone mass adequacy, biochemical markers of bone metabolism, and vitamin D status in individuals with cervical spinal cord injury sustained at least one year earlier. In twenty-five men aged 19 to 56 years, 15 physically active and 10 sedentary, serum calcium, PTH, 25(OH)D, IGF-1, osteocalcin, and NTx were measured. Bone mineral content, bone mineral density (BMD), lean mass, and fat mass were measured by DXA. Skin pigmentation (constitutive and tan-induced) was determined by colorimetry in order to investigate its influence on vitamin D status. Habitual calcium intake was recorded with a food frequency questionnaire targeting calcium-rich foods. Comparisons between the two groups were performed with Student's t test, except for the bone variables, which were compared after adjustment for total body mass, time since injury, and calcium intake using analysis of covariance. Associations between the variables studied were evaluated with Pearson correlation analysis. Values of p < 0.05 were considered significant. No statistically significant differences were observed between the groups for any bone variable, with the exception of the lumbar spine BMD z-score, which was significantly higher in the sedentary group (0.9 ± 1.7 vs. -0.7 ± 0.8; p < 0.05). However, among the active individuals, those who began exercising sooner after the injury had higher femoral BMD (r = -0.60; p < 0.05). In the active individuals, exercise frequency was negatively associated with serum i-PTH concentration (r = -0.50; p = 0.05) and positively associated with 25(OH)D concentration (r = 0.58; p < 0.05). After adjustment for total body mass and time since injury, positive associations were observed between daily calcium intake and the lumbar spine BMD z-score (r = 0.73; p < 0.01) and radius BMD (r = 0.56; p < 0.05). The results of the present study point to a beneficial effect of physical exercise on bone mass and on the hormonal profile related to bone metabolism. Starting regular physical exercise as soon as possible after the injury appears to help attenuate bone loss in the lower limbs. In addition, the results suggest a possible potentiation of the osteogenic effect of exercise when combined with adequate calcium intake.

Relevance:

30.00%

Publisher:

Abstract:

Coherent ecological networks (EN) composed of core areas linked by ecological corridors are being developed worldwide with the goal of promoting landscape connectivity and biodiversity conservation. However, empirical assessment of the performance of EN designs is critical to evaluate the utility of these networks in mitigating the effects of habitat loss and fragmentation. Landscape genetics provides a particularly valuable framework to address the question of functional connectivity by providing a direct means to investigate the effects of landscape structure on gene flow. The goals of this study are (1) to evaluate the landscape features that drive gene flow of an EN target species (the European pine marten), and (2) to evaluate the optimality of a regional EN design in providing connectivity for this species within the Basque Country (northern Spain). Using partial Mantel tests in a reciprocal causal modeling framework, we compared 59 alternative models, including isolation by distance and the regional EN. Our analysis indicated that the regional EN was among the most supported resistance models for the pine marten, but was not the best supported model. Gene flow of the pine marten in northern Spain is facilitated by natural vegetation and resisted by anthropogenic land-cover types and roads. Our results suggest that the regional EN design being implemented in the Basque Country will effectively facilitate gene flow of forest-dwelling species at a regional scale.
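
For readers unfamiliar with the Mantel machinery, the sketch below implements the basic (non-partial) Mantel test between two distance matrices, with significance assessed by permutation; the partial Mantel tests and reciprocal causal modeling framework used in the study build on this primitive. Matrix sizes and values are synthetic placeholders.

import numpy as np

def mantel_test(D1, D2, n_perm=999, seed=0):
    """Simple Mantel test: Pearson correlation between the upper triangles of
    two distance matrices, with significance assessed by permuting the
    row/column order of one matrix. (The study uses partial Mantel tests in a
    reciprocal causal modeling framework; this is only the basic building block.)"""
    rng = np.random.default_rng(seed)
    n = D1.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        r_perm = np.corrcoef(D1[iu], D2[perm][:, perm][iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    p_value = (count + 1) / (n_perm + 1)
    return r_obs, p_value

# Toy usage: "genetic" distances correlated with a resistance-based distance.
rng = np.random.default_rng(1)
n = 15
base = rng.random((n, n))
resistance = (base + base.T) / 2
np.fill_diagonal(resistance, 0)
noise = rng.random((n, n))
genetic = resistance + 0.1 * (noise + noise.T) / 2
np.fill_diagonal(genetic, 0)
print(mantel_test(genetic, resistance))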

Relevance:

30.00%

Publisher:

Abstract:

Sodium phosphate tellurite glasses in the system (NaPO3)x(TeO2)1-x were prepared and structurally characterized by thermal analysis, vibrational spectroscopy, X-ray photoelectron spectroscopy (XPS) and a variety of complementary solid-state nuclear magnetic resonance (NMR) techniques. Unlike the situation in other mixed-network-former glasses, the interaction between the two network formers, tellurium oxide and phosphorus oxide, produces no new structural units, and no sharing of the network modifier Na2O takes place. The glass structure can be regarded as a network of interlinked metaphosphate-type P(2) tetrahedral and TeO4/2 antiprismatic units. The combined interpretation of the O 1s XPS data and the 31P solid-state NMR spectra presents clear quantitative evidence for a nonstatistical connectivity distribution. Rather, the formation of homoatomic P-O-P and Te-O-Te linkages is favored over mixed P-O-Te connectivities. As a consequence of this chemical segregation effect, the spatial sodium distribution is not random, as also indicated by a detailed analysis of 31P/23Na rotational echo double-resonance (REDOR) experiments.

Relevance:

30.00%

Publisher:

Abstract:

This thesis reflects on the contradiction and conflict experienced by the Brazilian trade union movement in the face of a process of adaptation to the social metabolism of capital and its logic. It is argued that this process of adaptation and bureaucratization rests on transformism and pragmatism, gradually incorporated by a large share of union leaders as a result of the abandonment of socialist ideologies and of the strategic perspective of rupture with the capitalist mode of production, and of a praxis according to which it is possible to reform and humanize capital and capitalism. The aim was therefore to analyze and understand this process, which is dialectically in dispute within the CUT, highlighting the conflicting strategic projects under debate inside and outside the organization. In Brazil, the trade union movement was not immune to the new capitalist sociability and the neoliberal offensive. This adaptation has been profoundly altering its conceptions and political orientations. We observe an increasingly present tendency for the Central, which was born with strong elements of contestation of the capitalist order and of defense of the historical interests of workers, such as the struggle for socialism and for the emancipation of workers, to turn into a political and organizational tool for the maintenance of capital and its economic and societal project. In a broad sense, based on these theoretical postulates, a review of the existing literature on the subject was first carried out, historicizing the socio-historical and political processes that shaped and transformed the object, and analyzing the transformations and reconfigurations of the capitalist mode of production and the historical and political determinants of these changes in the ideology, organization, and practice of the majority sector of the union movement, the CUT. The sources were materials produced by the collective subjects, the CUT and other union confederations (political analysis notebooks, theses from union congresses, texts, reports, publications in books and journals), as well as other analyses and publications produced by scholars of the subject. We followed and took part in union congresses, thematic meetings, seminars, training courses, and the planning, elaboration, and development of projects. The political and ideological conceptions of the leaderships are present, represented directly or indirectly, in the congress theses, political resolutions, and analytical texts of the confederations and of the different internal tendencies that compose them, which were fundamental sources of reference for the analysis in our work. Thus, drawing on classical and contemporary theoretical references, we analyze the historical and immediate role of unions and of the union movement, seeking to contextualize it historically: its necessity, contradictions, possibilities, and limits as a class instrument in the struggle to build a societal project of economic, social, and political transformation, from the perspective of human emancipation. We investigate and seek to understand the socio-political determinants and constraints that have impacted the world of labor and the unions. In addition, we discuss the alternatives, tasks, and challenges facing unions and the union movement.