869 results for Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood
Abstract:
Non-Technical Summary of Seafood CRC Project 2009/774: Harvest strategy evaluations and co-management for the Moreton Bay Trawl Fishery. Principal Investigator: Dr Tony Courtney, Principal Fisheries Biologist, Fisheries and Aquaculture, Agri-Science Queensland, Department of Agriculture, Fisheries and Forestry, Level B1, Ecosciences Precinct, Joe Baker St, Dutton Park, Queensland 4102. Email: tony.courtney@daff.qld.gov.au
Project objectives:
1. Review the literature and data (i.e., economic, biological and logbook) relevant to the Moreton Bay trawl fishery.
2. Identify and prioritise management objectives for the Moreton Bay trawl fishery, as identified by the trawl fishers.
3. Undertake an economic analysis of the Moreton Bay trawl fishery.
4. Quantify long-term changes to fishing power for the Moreton Bay trawl fishery.
5. Assess priority harvest strategies identified in 2 (above). Present results to, and discuss results with, the Moreton Bay Seafood Industry Association (MBSIA), fishers and Fisheries Queensland.
Note: Additional, specific objectives for 2 (above) were developed by fishers and the MBSIA after commencement of the project. These are presented in detail in section 5 (below).
The project was an initiative of the MBSIA, primarily in response to falling profitability in the Moreton Bay prawn trawl fishery. The analyses were undertaken by a consortium of DAFF, CSIRO and University of Queensland researchers. This report adopted the Australian Standard Fish Names (http://www.fishnames.com.au/).
Trends in catch and effort
The Moreton Bay otter trawl fishery is a multispecies fishery, with the majority of the catch composed of Greasyback Prawns (Metapenaeus bennettae), Brown Tiger Prawns (Penaeus esculentus), Eastern King Prawns (Melicertus plebejus), squid (Uroteuthis spp., Sepioteuthis spp.), Banana Prawns (Fenneropenaeus merguiensis), Endeavour Prawns (Metapenaeus ensis, Metapenaeus endeavouri) and Moreton Bay bugs (Thenus parindicus). Other commercially important byproduct includes blue swimmer crabs (Portunus armatus), three-spot crabs (Portunus sanguinolentus), cuttlefish (Sepia spp.) and mantis shrimp (Oratosquilla spp.). Logbook catch and effort data show that the total annual reported catch of prawns from the Moreton Bay otter trawl fishery has declined from a maximum of 901 t in 1990 to 315 t in 2008. The number of active licensed vessels participating in the fishery has also declined, from 207 in 1991 to 57 in 2010. Similarly, fishing effort has fallen from a peak of 13,312 boat-days in 1999 to 3817 boat-days in 2008, a 71% reduction. The declines in catch and effort are largely attributed to reduced profitability in the fishery due to increased operational costs and depressed prawn prices. The low prawn prices appear to be attributable to Australian aquacultured prawns and imported aquacultured vannamei prawns displacing the markets for trawl-caught prawns, especially small species such as Greasyback Prawns, which traditionally dominated landings in Moreton Bay. In recent years, the relatively high Australian dollar has resulted in reduced exports of Australian wild-caught prawns. This has increased supply on the domestic market, which has also suppressed price increases. Since 2002, Brown Tiger Prawns have dominated annual reported landings in the Moreton Bay fishery. While total catch and effort in the bay have declined to historically low levels, the annual catch and catch rates of Brown Tiger Prawns have been at record highs in recent years.
This appears to be at least partially attributable to the tiger prawn stock having recovered from excessive effort in previous decades. The total annual value of the Moreton Bay trawl fishery catch, including byproduct, is about $5 million, of which Brown Tiger Prawns account for about $2 million. Eastern King Prawns make up about 10% of the catch and are mainly caught in the bay from October to December as they migrate to offshore waters outside the bay, where they contribute to a large mono-specific trawl fishery. Some of the Eastern King Prawns harvested in Moreton Bay may be growth overfished (i.e., caught below the size required to maximise yield or value), although the optimum size-at-capture was not determined in this study. Banana Prawns typically make up about 5% of the catch, but can exceed 20%, particularly following heavy rainfall.
Economic analysis of the fishery
From the economic survey, cash profits were, on average, positive for both fleet segments in both years of the survey. However, after the opportunity cost of capital and depreciation were taken into account, the residual owner-operator income was relatively low, and substantially lower than the average share of revenue paid to employed skippers. Consequently, owner-operators were earning less than the opportunity cost of their labour, suggesting that the fleets were economically unviable in the longer term. The M2 licensed fleet was, on average, earning boat cash profits similar to the T1/M1 fleet, although after the higher capital costs were accounted for, the T1/M1 boats were earning substantially lower returns to owner-operator labour. The mean technical efficiency for the fleet as a whole was estimated to be 0.67. That is, on average, the boats were catching only 67 per cent of what was possible given their level of inputs (hours fished and hull units). Almost one-quarter of observations had efficiency scores above 0.8, suggesting that a substantial proportion of the fleet is relatively efficient, but some boats are also relatively inefficient. Both fleets had similar efficiency distributions, with median technical efficiency scores of 0.71 and 0.67 for the M2 and T1/M1 boats respectively. These scores are reasonably consistent with other studies of prawn trawl fleets in Australia, although higher average efficiency scores were found in the NSW prawn trawl fleet. From the inefficiency model, several factors were found to significantly influence vessel efficiency. These included the number of years of experience as skipper, the number of generations that the skipper's family had been fishing and the number of years of schooling. Skippers with more schooling were significantly more efficient than skippers with lower levels of schooling, consistent with other studies. Skippers who had been fishing longer were, in fact, less efficient than newer skippers. However, this was mitigated in the case of skippers whose family had been involved in fishing for several generations, consistent with other studies and suggesting that skill was passed down through families over successive generations. Both the linear and log-linear regression models of total fishing effort against marginal profit per hour performed reasonably well, explaining between 70 and 84 per cent of the variation in fishing effort. As the models had different dependent variables (one logged and the other not), this is not a good basis for model choice.
A better comparator is the square root of the mean square error (SMSE) expressed as a percentage of mean total effort. On this criterion, both models performed very similarly. The linear model suggests that each additional dollar of average profit per hour in the fishery increases total effort by around 26 hours each month. From the log-linear model, each one per cent increase in profits per hour increases total fishing effort by 0.13 per cent. Both models indicate that economic performance is a key driver of fishing effort in the fishery. The effect of removing the boat-replacement policy is to increase individual vessel profitability, catch and effort, but the overall increase in catch is less than that removed by the boats that must exit the fishery. That is, the smaller fleet (in terms of boat numbers) is more profitable, but the overall catch is not expected to be greater than before. This assumes, however, that active boats are removed, and that these were taking an average level of catch. If inactive boats are removed, then the catch of the remaining group as a whole could increase by between 14 and 17 per cent, depending on the degree to which costs are reduced with the new boats. This is still substantially lower than historical levels of catch by the fleet.
Fishing power analyses
An analysis of logbook data from 1988 to 2010, together with survey information on fishing gear, was performed to estimate the long-term variation in the fleet's ability to catch prawns (known as fishing power) and to derive abundance estimates of the three most commercially important prawn species (i.e., Brown Tiger, Eastern King and Greasyback Prawns). Generalised linear models were used to explain the variation in catch as a function of effort (i.e., hours fished per day), vessel and gear characteristics, onboard technologies, population abundance and environmental factors. This analysis estimated that fishing power for Brown Tiger and Eastern King Prawns increased by 10–30% over the past 20 years, while fishing power for Greasybacks declined by approximately 10%. The density of tiger prawns was estimated to have almost tripled, from around 0.5 kg per hectare in 1988 to 1.5 kg/ha in 2010. The density of Eastern King Prawns was estimated to have fluctuated between 1 and 2 kg per hectare over this time period, without any noticeable overall trend, while Greasyback Prawn densities were estimated to have fluctuated between 2 and 6 kg per hectare, also without any distinctive trend. A model of tiger prawn catches was developed to evaluate the impact of fishing on prawn survival rates in Moreton Bay. The model was fitted to logbook data using the maximum-likelihood method to provide estimates of the natural mortality rate (0.038 and 0.062 per week) and catchability, defined as the proportion of the fished population that is removed by one unit of effort, in this case estimated to be (2.5 ± 0.4) × 10⁻⁴ per boat-day. This approach provided a method for industry and scientists to develop together a realistic model of the dynamics of the fishery. Several aspects need to be developed further to make this model acceptable to industry. Firstly, there is considerable evidence that temperature influences prawn catchability; this ecological effect should be incorporated before developing meaningful harvest strategies. Secondly, total effort has to be allocated among the species. Such allocation of effort could be included in the model by estimating several catchability coefficients.
Nevertheless, the work presented in this report is a stepping stone towards estimating essential fishery parameters and developing the representative mathematical models required to evaluate harvest strategies. Developing a method that allowed an effective discussion between industry, management and scientists took longer than anticipated. As a result, the harvest strategy evaluations were preliminary and included only the most valuable species in the fishery, Brown Tiger Prawns. Additional analyses and data collection, including information on catch composition from field sampling, migration rates and recruitment, would improve the modelling.
Harvest strategy evaluations
As the harvest strategy evaluations are preliminary, the following results should not be adopted for management purposes until more thorough evaluations are performed. The effects of closing the fishery for one calendar month on the annual catch and value of Brown Tiger Prawns were investigated. Each of the 12 months (i.e., January to December) was evaluated. The results were compared against historical records to determine the magnitude of the gain or loss associated with the closure. Uncertainty regarding trawl selectivity was addressed using two selectivity curves, one with a weight at 50% selection (S50%) of 7 g, based on research data, and a second with an S50% of 14 g, put forward by industry. In both cases, it was concluded that any monthly closure after February would not be beneficial to the industry. The magnitude of the benefit of closing the fishery in either January or February was sensitive to which mesh selectivity curve was assumed, with greater benefit achieved under the smaller selectivity curve (i.e., S50% = 7 g). Using the smaller selectivity (S50% = 7 g), the expected increase in catch value was 10–20%, which equates to $200,000 to $400,000 annually, while the larger selectivity curve (S50% = 14 g) suggested catch value would be improved by 5–10%, or $100,000 to $200,000. The harvest strategy evaluations showed that greater benefits, in the order of 30–60% increases in annual tiger prawn catch value, could have been obtained by closing the fishery early in the year when annual effort levels were high (i.e., >10,000 boat-days). In recent years, as effort levels have declined (i.e., ~4000 boat-days annually), the expected benefits from such closures are more modest. In essence, temporal closures offer greater benefit when fishing mortality rates are high. A spatial analysis of Brown Tiger Prawn catch and effort was also undertaken to obtain a better understanding of the prawn population dynamics. This indicated that, to improve the profitability of the fishery, fishers could consider closing the fishery from June to October, which is already a period of low profitability. This would protect the Brown Tiger Prawn spawning stock, increase catch rates of all species in the lucrative pre-Christmas period (November–December), and provide fishers with time to do vessel maintenance, arrange markets for the next season's harvest and, if they wish, work at other jobs. The analysis found that the instantaneous rate of total mortality (Z) for the March–June period did not vary significantly over the last two decades.
As the Brown Tiger Prawn population in Moreton Bay has clearly increased over this time period, an interesting conclusion is that the instantaneous rate of natural mortality (M) must have increased, suggesting that tiger prawn natural mortality may be density-dependent at this time of year. Mortality rates of tiger prawns for June–October were found to have decreased over the last two decades, which has probably had a positive effect on spawning stocks in the October–November spawning period.
Abiotic effects on the prawns
The influence of air temperature, rainfall, freshwater flow, the southern oscillation index (SOI) and lunar phase on the catch rates of the four main prawn species was investigated. The analyses were based on over 200,000 daily logbook catch records spanning 23 years (i.e., 1988–2010). Freshwater flow was more influential than rainfall and the SOI, and of the various sources of flow, the Brisbane River has the greatest volume and the greatest influence on Moreton Bay prawn catches. A number of time-lags were also considered. Flow in the month preceding catch (i.e., 1–30 days prior, Logflow1_30) and in the two months prior (31–60 days prior, Logflow31_60) had strong positive effects on Banana Prawn catch rates. Average air temperature in the preceding 4–6 months (Temp121_180) also had a large positive effect on Banana Prawn catch rates. Flow in the month immediately preceding catch (Logflow1_30) had a strong positive influence on Greasyback Prawn catch rates. Air temperature in the two months preceding catch (Temp1_60) had a large positive effect on Brown Tiger Prawn catch rates. No obvious or marked effects were detected for Eastern King Prawns, although, interestingly, catch rates declined with increasing air temperature 4–6 months prior to catch. As most Eastern King Prawn catches in Moreton Bay occur from October to December, the results suggest catch rates decline with increasing winter temperatures. In most cases, prawn catch rates declined with the waxing lunar phase (high luminance/full moon) and increased with the waning moon (low luminance/new moon). The SOI explained little additional variation in prawn catch rates (<2%), although its influence was higher for Banana Prawns. Extrapolation of these findings to long-term climate change effects should be made with caution. That said, the results are consistent with likely increases in abundance in the region for the two tropical species, Banana Prawns and Brown Tiger Prawns, as coastal temperatures rise. Conversely, declines in abundance could be expected for the two temperate species, Greasyback and Eastern King Prawns.
Corporate management structures
An examination of alternative governance systems was requested by the industry at one of the early meetings, particularly systems that might give fishers greater autonomy in decision making as well as help improve the marketing of their product. Consequently, a review of alternative management systems was undertaken, with a particular focus on the potential for self-management of small fisheries (small in terms of number of participants) and corporate management. The review looked at systems that have been implemented or proposed for other small fisheries internationally, with a particular focus on self-management as well as the potential benefits and challenges of corporate management. The review also highlighted particular opportunities for the Moreton Bay prawn fishery.
Corporate management differs from other co-management and even self-management arrangements in that 'ownership' of the fishery is devolved to a company in which fishers and government are shareholders. The company manages the fishery and coordinates marketing to ensure that the best prices are received and that the catch taken meets the demands of the market. Coordinated harvesting will also result in increased profits, which are returned to fishers in the form of dividends. Corporate management offers many of the potential benefits of an individual quota system without formally implementing such a system. A corporate management model offers an advantage over a self-management model in that it can coordinate both marketing and management to take advantage of the fishery's unique geographical position. For such a system to be successful, the fishery needs to be relatively small and self-contained, where 'small' refers to the number of operators. The Moreton Bay prawn fishery satisfies these key conditions for successful self-management and, potentially, corporate management. The fishery is small both in terms of number of participants and geography. Unlike other fisheries that have progressed down the self-management route, the key market for the product of the Moreton Bay fishery is right at its doorstep. Corporate management also presents a number of challenges. First, it will require changes in the way fishers operate. In particular, the decision on when to fish and what to catch will be taken away from the individual and decided by the collective. Problems will develop if individuals do not join the corporation but continue to fish and market their own product separately. While this may seem an attractive option to fishers who believe they can do better independently, it is likely to be only a short-term advantage with an overall long-run cost to themselves as well as the rest of the industry. A number of other areas need further consideration, particularly in relation to the allocation of shares, including who should be allocated shares (e.g. just boat owners, or also some employed skippers), and, similarly, how harvesting activity is to be allocated by the corporation to the fishers. These are largely issues that cannot be answered without substantial consultation with those likely to be affected, and these groups cannot give the issues serious consideration until the point at which they are likely to become a reality. Given the current structure and complexity of the fishery, it is unlikely that such a management structure will be feasible in the short term. However, the fishery is a prime candidate for such a model, and development of such a management structure should be considered as an option for the longer term.
Abstract:
This paper presents a maximum likelihood method for estimating the growth parameters of an aquatic species that incorporates growth covariates and accommodates multiple tag-recapture data sets. Individual variability in asymptotic length, age-at-tagging and measurement error are also considered in the model structure. Using distribution theory, the log-likelihood function is derived under a generalised framework for the von Bertalanffy and Gompertz growth models. Owing to the generality of the derivation, covariate effects can be included for both models, with seasonality and tagging effects investigated. The method's robustness is established via comparison with the Fabens, improved Fabens, James and non-linear mixed-effects growth models, with the maximum likelihood method performing best. The method is further illustrated with an application to blacklip abalone (Haliotis rubra), for which a strong growth-retarding tagging effect that persisted for several months was detected. (C) 2013 Elsevier B.V. All rights reserved.
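For orientation, the Fabens baseline that the paper improves upon fits the von Bertalanffy increment equation ΔL = (L∞ − L1)(1 − e^(−K·Δt)) to tag-recapture pairs. A minimal sketch of such a fit by maximum likelihood under normal errors, on synthetic data; this deliberately omits the paper's individual variability, covariates and tagging effects:

    # Fabens-style von Bertalanffy fit to tag-recapture growth increments.
    # A sketch under assumed normal errors, not the paper's full model.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    Linf_true, K_true, sigma_true = 120.0, 0.4, 2.0     # values used to fake data
    L1 = rng.uniform(40, 100, size=200)                 # lengths at tagging
    dt = rng.uniform(0.2, 2.0, size=200)                # years at liberty
    dL = (Linf_true - L1) * (1 - np.exp(-K_true * dt)) \
         + rng.normal(0, sigma_true, size=200)          # observed growth increments

    def negloglik(theta):
        Linf, K, log_sigma = theta
        mu = (Linf - L1) * (1 - np.exp(-K * dt))        # expected increment
        s = np.exp(log_sigma)
        return 0.5 * np.sum(((dL - mu) / s) ** 2) + dL.size * np.log(s)

    fit = minimize(negloglik, x0=[100.0, 0.2, 0.0], method="Nelder-Mead")
    print("Linf, K, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))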
Abstract:
Fisheries management agencies around the world collect age data for the purpose of assessing the status of natural resources in their jurisdiction. Estimates of mortality rates represent key information for assessing the sustainability of fish stock exploitation. In contrast to medical research and manufacturing, where survival analysis is routinely applied to estimate failure rates, survival analysis has seldom been applied in fisheries stock assessment, despite the similar purposes of these fields of applied statistics. In this paper, we developed hazard functions to model the dynamics of an exploited fish population. These functions were used to estimate all parameters necessary for stock assessment (including natural and fishing mortality rates as well as gear selectivity) by maximum likelihood using age data from a sample of the catch. This novel application of survival analysis to fisheries stock assessment was tested by Monte Carlo simulation to verify that it provided unbiased estimates of the relevant quantities. The method was applied to data from the Queensland (Australia) sea mullet (Mugil cephalus) commercial fishery collected between 2007 and 2014. It provided, for the first time, an estimate of the natural mortality affecting this stock: 0.22 ± 0.08 year⁻¹.
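The simplest special case of this idea: for ages above full gear selection, survivorship under a constant total hazard Z is exponential, so Z can be estimated by maximum likelihood from a catch-at-age sample. A toy sketch only; the paper's actual hazard model separates natural and fishing mortality and estimates selectivity:

    # ML estimate of total mortality Z from catch-at-age data, assuming the
    # ages beyond the age at full selection a_c follow a shifted exponential.
    import numpy as np

    rng = np.random.default_rng(1)
    Z_true, a_c = 0.7, 2.0                       # total mortality (1/yr); full-selection age
    ages = a_c + rng.exponential(1.0 / Z_true, size=500)   # simulated catch sample

    # ML estimator for the shifted exponential: Z_hat = 1 / (mean age - a_c)
    Z_hat = 1.0 / (ages.mean() - a_c)
    se = Z_hat / np.sqrt(ages.size)              # asymptotic standard error
    print(f"Z_hat = {Z_hat:.2f} +/- {se:.2f} per year")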
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion, but due to computational difficulties this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational-number solutions.
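To make the computational difficulty concrete: NML divides the maximized likelihood of the observed data by a normalizer that sums the maximized likelihood over every possible data set. A brute-force sketch for the multinomial case, exponential in the number of categories, whereas the thesis's algorithms compute the same quantity efficiently:

    # Brute-force NML for a K-category multinomial on n observations:
    # C(K, n) = sum over count vectors h of  multinom(n; h) * prod_k (h_k/n)^h_k
    from math import factorial, log2

    def compositions(n, k):                     # all count vectors summing to n
        if k == 1:
            yield (n,)
            return
        for first in range(n + 1):
            for rest in compositions(n - first, k - 1):
                yield (first,) + rest

    def multinomial(counts):
        n, m = sum(counts), factorial(sum(counts))
        for c in counts:
            m //= factorial(c)
        return m

    def nml_normalizer(K, n):
        total = 0.0
        for h in compositions(n, K):
            p = 1.0
            for c in h:
                if c:
                    p *= (c / n) ** c           # maximized likelihood of counts h
            total += multinomial(h) * p
        return total

    # NML code length (stochastic complexity) of observed counts, in bits:
    counts = (10, 5)
    n, K = sum(counts), len(counts)
    max_ll = sum(c * log2(c / n) for c in counts if c)
    print(-max_ll + log2(nml_normalizer(K, n)))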
Abstract:
In this paper, we present a low-complexity detector for large MIMO systems having tens of transmit and receive antennas that achieves near maximum-likelihood (ML) performance. Such large MIMO systems are of interest because of the high spectral efficiencies they make possible. The proposed detection algorithm, termed the multistage likelihood-ascent search (M-LAS) algorithm, is rooted in Hopfield neural networks and is shown to possess excellent performance as well as complexity attributes. In terms of performance, in a 64 x 64 V-BLAST system with 4-QAM, the proposed algorithm achieves an uncoded BER of 10⁻³ at an SNR just about 1 dB away from the AWGN-only SISO performance given by Q(√SNR). In terms of coded BER, with a rate-3/4 turbo code at a spectral efficiency of 96 bps/Hz, the algorithm performs within about 4.5 dB of theoretical capacity, which is remarkable in terms of both the high spectral efficiency and the nearness to theoretical capacity. Our simulation results show that the above performance is achieved with a complexity of just O(NtNr) per symbol, where Nt and Nr denote the number of transmit and receive antennas.
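The core of any LAS-type detector is a greedy descent on the ML metric ||y − Hx||²: start from a cheap initial estimate and repeatedly apply the single-symbol change that most reduces the metric. A toy single-stage sketch on a small 4-QAM system; the paper's M-LAS adds multistage escapes from local minima:

    # One-symbol-update likelihood-ascent search (LAS) sketch for MIMO detection.
    import numpy as np

    rng = np.random.default_rng(2)
    Nt = Nr = 8
    alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])   # 4-QAM symbols

    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
    x_true = rng.choice(alphabet, size=Nt)
    y = H @ x_true + 0.1 * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))

    def cost(x):
        r = y - H @ x
        return np.real(np.vdot(r, r))          # ML metric ||y - Hx||^2

    # initial estimate: matched-filter output, crudely scaled, sliced to alphabet
    v = (H.conj().T @ y) / Nt
    x = alphabet[np.argmin(np.abs(v[:, None] - alphabet), axis=1)]

    improved = True
    while improved:                             # greedy likelihood-ascent loop
        improved = False
        best = cost(x)
        for i in range(Nt):
            for s in alphabet:
                if s != x[i]:
                    trial = x.copy()
                    trial[i] = s
                    if cost(trial) < best:
                        best, x, improved = cost(trial), trial, True
    print("detected == sent:", np.array_equal(x, x_true))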
Abstract:
Customer value has been identified as "the reason" for customers to patronize a firm, and as one of the fundamental building blocks that market exchanges build upon. Despite the importance of customer value, it is often poorly defined, or seems to refer to different phenomena. This dissertation contributes to the current marketing literature by subjecting the value concept to a critical investigation and by clarifying its conceptual foundation. Based on the literature review, it is proposed that customer value can be divided into two separate but interrelated aspects: value creation processes and value outcome determination. This means that, on the one hand, it is possible to examine those activities through which value is created, and on the other hand, to investigate how customers determine the value outcomes they receive. The results further show that customers may determine value in four different ways: value as a benefit/sacrifice ratio, value as experience outcomes, value as means-end chains, and value as phenomenological. In value as a benefit/sacrifice ratio, customers are expected to calculate the ratio between service benefits (e.g. ease of use) and sacrifices (e.g. price). In value as experience outcomes, customers are suggested to experience multiple value components, such as functional, emotional or social value. Customer value as means-end chains in turn models value in terms of the relationships between service characteristics, use value and desirable ends (e.g. social acceptance). Finally, value as phenomenological proposes that value emerges from lived, holistic experiences. The empirical papers investigate customer value in e-services, including online health care and mobile services, and show how value in e-services stems from process and content quality, use context, and the service combination that a customer uses. In conclusion, marketers should understand that different value definitions generate different types of understanding of customer value. In addition, it is clear that studying value from several perspectives is useful, as it enables a richer understanding of value for the different actors. Finally, the interconnectedness between value creation and value determination is surprisingly little researched, and this dissertation proposes initial steps towards understanding the relationship between the two.
Abstract:
The thesis presents a state-space model for a basketball league and a Kalman filter algorithm for estimating the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength relative to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least-squares optimal estimates of the team strengths and predictions of the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65–70% of the time. The team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home-court advantage observed in the scores. The thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The thesis also gives various example analyses related to the American professional basketball league, i.e., the National Basketball Association (NBA), and the regular seasons played from 2005 through 2010. Additionally, the 2009–2010 season is discussed in full detail, including the playoffs.
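A compact sketch of this filtering scheme: ratings follow a random walk, and each game contributes one scalar observation of the rating difference plus a home advantage. All parameter values below are illustrative, not the thesis's fitted estimates:

    # Kalman filter over random-walk team ratings; one update per game.
    import numpy as np

    n_teams, q, r, home_adv = 4, 0.1, 9.0, 3.0   # process var, obs var, home edge
    mu = np.zeros(n_teams)                        # rating means
    P = np.eye(n_teams) * 10.0                    # rating covariance

    def kalman_game(mu, P, home, away, margin):
        # observation: margin = h'mu + home_adv + noise, with h = e_home - e_away
        h = np.zeros(n_teams)
        h[home], h[away] = 1.0, -1.0
        P = P + q * np.eye(n_teams)               # predict: random-walk drift
        S = h @ P @ h + r                          # innovation variance
        K = P @ h / S                              # Kalman gain
        innov = margin - (h @ mu + home_adv)
        return mu + K * innov, P - np.outer(K, h @ P)

    games = [(0, 1, 7.0), (2, 3, -4.0), (0, 2, 2.0), (1, 3, 5.0)]  # home, away, margin
    for home, away, margin in games:
        mu, P = kalman_game(mu, P, home, away, margin)
    print("estimated ratings:", np.round(mu, 2))

Predicting a future game then amounts to reading off h'mu + home_adv and its variance h'Ph + r from the current filter state.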
Abstract:
We present a low-complexity algorithm based on reactive tabu search (RTS) for near maximum-likelihood (ML) detection in large-MIMO systems. The conventional RTS algorithm achieves near-ML performance for 4-QAM in large-MIMO systems, but its performance for higher-order QAM is far from ML performance. Here, we propose a random-restart RTS (R3TS) algorithm which achieves significantly better bit error rate (BER) performance than the conventional RTS algorithm with higher-order QAM. The key idea is to run multiple tabu searches, each starting from a random initial vector, and to choose the best among the resulting solution vectors. A criterion to limit the number of searches is also proposed. Computer simulations show that the R3TS algorithm achieves almost ML performance in a 16 x 16 V-BLAST MIMO system with 16-QAM and 64-QAM at significantly lower complexity than the sphere decoder. Also, in a 32 x 32 V-BLAST MIMO system, R3TS performs close to the ML lower bound, within 1.6 dB for 16-QAM (128 bps/Hz) and within 2.4 dB for 64-QAM (192 bps/Hz), at 10⁻³ BER.
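The control flow in miniature: each restart runs a tabu search in which recently reversed moves are forbidden for some iterations, letting the search climb out of local minima. A sketch with made-up sizes, a fixed (rather than reactive) tabu tenure, and a real-valued toy channel:

    # Random-restart tabu search sketch for MIMO detection (fixed tenure).
    import numpy as np

    rng = np.random.default_rng(3)
    Nt = Nr = 8
    alphabet = np.array([-3.0, -1.0, 1.0, 3.0])            # one 16-QAM axis
    H = rng.normal(size=(Nr, Nt)) / np.sqrt(Nt)
    x_true = rng.choice(alphabet, size=Nt)
    y = H @ x_true + 0.05 * rng.normal(size=Nr)

    def cost(x):
        r = y - H @ x
        return r @ r

    def tabu_search(x, iters=60, tenure=5):
        best_x, best_c, tabu = x.copy(), cost(x), {}
        for t in range(iters):
            # candidate moves: change one position to a non-tabu symbol
            moves = [(i, s) for i in range(Nt) for s in alphabet
                     if s != x[i] and tabu.get((i, s), -1) < t]
            if not moves:
                break
            i, s = min(moves, key=lambda m: cost(np.where(np.arange(Nt) == m[0], m[1], x)))
            tabu[(i, x[i])] = t + tenure       # forbid undoing this move for a while
            x[i] = s                            # accept best move, even if uphill
            if cost(x) < best_c:
                best_x, best_c = x.copy(), cost(x)
        return best_x, best_c

    # random restarts: keep the best solution over several independent searches
    best_x, best_c = min((tabu_search(rng.choice(alphabet, size=Nt)) for _ in range(5)),
                         key=lambda bc: bc[1])
    print("detected == sent:", np.array_equal(best_x, x_true))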
Abstract:
We consider the possibility of fingerprinting the presence of heavy additional Z′ bosons, which arise naturally in extensions of the standard model such as E6 models and left-right symmetric models, through their mixing with the standard model Z boson. By considering a class of observables including total cross sections, energy distributions and angular distributions of the decay leptons, we find significant deviations from the standard model predictions for these quantities with right-handed electrons and left-handed positrons at √s = 800 GeV. The deviations are less pronounced at smaller centre-of-mass energies, as the models are already tightly constrained there. Our work suggests that the ILC should have a strong beam-polarization physics program, particularly with these configurations. On the other hand, a forward-backward asymmetry and the lepton fraction in the backward direction are more sensitive to new physics with realistic polarization, due to an interesting interplay with the neutrino t-channel diagram. This process complements the study of fermion pair production processes that have been considered for discriminating between these models.
Abstract:
In this paper, we are concerned with low-complexity detection in large multiple-input multiple-output (MIMO) systems with tens of transmit/receive antennas. Our new contributions are two-fold. First, we propose a low-complexity algorithm for large-MIMO detection based on a layered low-complexity local neighborhood search. Second, we obtain a lower bound on the maximum-likelihood (ML) bit error performance using the local neighborhood search. The advantages of the proposed ML lower bound are that i) it is easily obtained for MIMO systems with a large number of antennas because of the inherent low complexity of the search algorithm, ii) it is tight at moderate-to-high SNRs, and iii) it can be tightened at low SNRs by increasing the number of symbols in the neighborhood definition. Interestingly, the proposed detection algorithm based on the layered local search achieves bit error performance quite close to this lower bound for large numbers of antennas and higher-order QAM. For example, in a 32 x 32 V-BLAST MIMO system, the proposed detection algorithm performs within 1.7 dB of the proposed ML lower bound at 10⁻³ BER for 16-QAM (128 bps/Hz), and within 4.5 dB of the bound for 64-QAM (192 bps/Hz).
Abstract:
The Generalized Distributive Law (GDL) is a message-passing algorithm which can efficiently solve a certain class of computational problems, and it includes as special cases Viterbi's algorithm, the BCJR algorithm, the fast Fourier transform, and turbo and LDPC decoding algorithms. In this paper, GDL-based maximum-likelihood (ML) decoding of Space-Time Block Codes (STBCs) is introduced, and a sufficient condition for an STBC to admit low GDL decoding complexity is given. Fast decoding and multigroup decoding are the two algorithms used in the literature to ML-decode STBCs with low complexity; an algorithm which exploits the advantages of both is called Conditional ML (CML) decoding. It is shown in this paper that the GDL decoding complexity of any STBC is upper bounded by its CML decoding complexity, and that there exist codes for which the GDL complexity is strictly less than the CML complexity. Explicit examples of two such families of STBCs are given. Thus CML is in general suboptimal in reducing the ML decoding complexity of a code, and one should design codes with low GDL complexity rather than low CML complexity.
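As a reminder of what the GDL unifies, here is its best-known special case, Viterbi's algorithm, written as a max-sum (log-domain max-product) recursion along a chain. The HMM numbers are made up for illustration:

    # Viterbi's algorithm: the max-sum instance of the GDL on a chain graph.
    import numpy as np

    # toy HMM: 2 hidden states, 3 observation symbols
    log_pi = np.log([0.6, 0.4])                          # initial distribution
    log_A = np.log([[0.7, 0.3], [0.4, 0.6]])             # transition matrix
    log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])   # emission matrix
    obs = [0, 1, 2, 2]

    # forward pass: delta[j] = best log-prob of any state path ending in j
    delta = log_pi + log_B[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + log_A                  # scores[i, j]: i -> j
        back.append(scores.argmax(axis=0))               # best predecessor of j
        delta = scores.max(axis=0) + log_B[:, o]

    # backtrack the argmaxes to recover the most likely state sequence
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    print(list(reversed(path)))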
Abstract:
We propose an iterative data reconstruction technique specifically designed for multi-dimensional multi-color fluorescence imaging. A Markov random field is employed (to model the multi-color image field) in conjunction with the classical maximum likelihood method. It is noted that the ill-posed nature of the inverse problem associated with multi-color fluorescence imaging forces iterative data reconstruction. Reconstructions of three-dimensional (3D) two-color images (obtained from nanobeads and cultured cell samples) show a significant reduction in background noise (improved signal-to-noise ratio) with an impressive overall improvement in the spatial resolution (approximately 250 nm) of the imaging system. The proposed data reconstruction technique may find immediate application in 3D in vivo and in vitro multi-color fluorescence imaging of biological specimens. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4769058
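For context, the classical maximum-likelihood iteration for fluorescence data with Poisson noise is the Richardson-Lucy (EM) update; the paper couples such an ML step with a Markov-random-field model across colour channels. A single-channel 2-D sketch without the MRF term:

    # Richardson-Lucy ML deconvolution sketch (Poisson likelihood, 2-D, 1 colour).
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(4)
    truth = np.zeros((64, 64))
    truth[20:24, 30:34] = 50.0                             # toy fluorophore patch
    xx = np.arange(-7, 8)
    psf = np.exp(-(xx[:, None] ** 2 + xx[None, :] ** 2) / 8.0)
    psf /= psf.sum()                                       # normalized Gaussian PSF

    blurred = fftconvolve(truth, psf, mode="same")
    data = rng.poisson(blurred + 1e-3).astype(float)       # Poisson photon noise

    est = np.full_like(data, data.mean())                  # flat initial estimate
    for _ in range(50):
        pred = fftconvolve(est, psf, mode="same") + 1e-12
        # multiplicative EM update: est <- est * H^T(data / H est)
        est *= fftconvolve(data / pred, psf[::-1, ::-1], mode="same")
    print("peak sharpened:", est.max() > blurred.max())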
Abstract:
Latent variable methods such as PLCA (Probabilistic Latent Component Analysis) have been used successfully for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. The hard constraint facilitates the solution of ML boundary estimation using dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared with state-of-the-art speech segmentation algorithms.
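The dynamic-programming step is the standard one for ML boundary placement: tabulate the best segmentation cost of every prefix for every segment count, then backtrack. A sketch with a toy Gaussian segment model standing in for PLCS's latent spectral components:

    # ML segmentation of 1-D data into K segments by dynamic programming.
    import numpy as np

    def segment(x, K):
        # Toy segment model: Gaussian with ML mean and unit variance, so a
        # segment's negative log-likelihood is (up to constants) its RSS.
        T = len(x)
        cum1 = np.concatenate([[0.0], np.cumsum(x)])
        cum2 = np.concatenate([[0.0], np.cumsum(np.asarray(x) ** 2)])

        def cost(i, j):                       # residual sum of squares of x[i:j]
            n = j - i
            return cum2[j] - cum2[i] - (cum1[j] - cum1[i]) ** 2 / n

        D = np.full((K + 1, T + 1), np.inf)   # D[k, t]: best cost of x[:t], k parts
        D[0, 0] = 0.0
        arg = np.zeros((K + 1, T + 1), dtype=int)
        for k in range(1, K + 1):
            for t in range(k, T + 1):
                D[k, t], arg[k, t] = min((D[k - 1, s] + cost(s, t), s)
                                         for s in range(k - 1, t))
        bounds, t = [], T                     # backtrack boundary positions
        for k in range(K, 0, -1):
            bounds.append(t)
            t = arg[k, t]
        return sorted(bounds)[:-1]            # interior boundaries only

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(m, 1.0, 40) for m in (0.0, 4.0, -2.0)])
    print(segment(x, K=3))                    # expect boundaries near 40 and 80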
Abstract:
Generalized spatial modulation (GSM) is a relatively new modulation scheme for multi-antenna wireless communications. It is attractive because of its ability to work with fewer transmit RF chains than traditional spatial multiplexing (the V-BLAST system) requires. In this paper, we show that, by using an optimum combination of the number of transmit antennas (Nt) and the number of transmit RF chains (Nrf), GSM can achieve better throughput and/or bit error rate (BER) than spatial multiplexing. First, we quantify the percentage savings in the number of transmit RF chains as well as the percentage increase in the rate achieved by GSM compared to spatial multiplexing; 18.75% savings in the number of RF chains and a 9.375% increase in rate are possible with 16 transmit antennas and 4-QAM modulation. A bottleneck, however, is the complexity of maximum-likelihood (ML) detection of GSM signals, particularly in large MIMO systems where the number of antennas is large. We address this detection complexity issue next. Specifically, we propose a Gibbs sampling based algorithm suited to detecting GSM signals. The proposed algorithm yields impressive BER performance and complexity results. For the same spectral efficiency and number of transmit RF chains, GSM with the proposed detection algorithm achieves better performance than spatial multiplexing with ML detection.
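The quoted percentages follow from the standard GSM rate formula, floor(log2 C(Nt, Nrf)) + Nrf·log2(M) bits per channel use. They match the choice Nrf = 13, which is an inference from the numbers rather than a value stated above:

    # Reproducing the RF-chain/rate arithmetic quoted in the abstract.
    from math import comb, floor, log2

    def gsm_rate(Nt, Nrf, M):
        # antenna-index bits + symbol bits per channel use
        return floor(log2(comb(Nt, Nrf))) + Nrf * int(log2(M))

    Nt, M = 16, 4                                    # 16 transmit antennas, 4-QAM
    vblast = Nt * int(log2(M))                       # V-BLAST: 32 bits/channel use
    Nrf = 13                                         # assumed GSM configuration
    print(gsm_rate(Nt, Nrf, M))                      # 35 bpcu
    print((Nt - Nrf) / Nt)                           # 0.1875 -> 18.75% fewer chains
    print((gsm_rate(Nt, Nrf, M) - vblast) / vblast)  # 0.09375 -> 9.375% more rate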
Abstract:
We explore beyond-standard-model (BSM) physics signatures in the l + jets channel of the tt̄ pair production process at the Tevatron and the LHC. We study the effects of BSM physics scenarios on the top-quark polarization and on the kinematics of the decay leptons. To this end, we construct asymmetries using the lepton energy and angular distributions. Further, we find their correlations with the top polarization, the net charge asymmetry and the top forward-backward asymmetry. We show that, when used together, these observables can help discriminate effectively between the SM and different BSM scenarios, which can lead to varying degrees of top polarization at the Tevatron as well as the LHC. We use two types of colored mediator models to demonstrate the effectiveness of the proposed observables: an s-channel axigluon and a u-channel diquark.