980 results for Minimum distance
Abstract:
The analysis of modulation schemes for the physical-layer network-coded two-way relaying scenario is presented, which employs two phases: the multiple access (MA) phase and the broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as the singular fade states. The singular fade states fall into two classes: (i) those caused by channel outage, whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) those which occur due to the choice of the signal set and whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal-to-Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall as SNR⁻¹: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR⁻², thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario. Furthermore, it is shown that for large Rician factors, among the removable singular fade states with the same magnitude, only those with the least absolute value of the phase angle contribute dominantly to the end-to-end SER, and it suffices to remove the effect of only such singular fade states.
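To make the notion concrete, here is a small Python sketch (our illustration, not the paper's code) that computes the minimum distance of the effective MA-phase constellation {x1 + h*x2} seen at the relay and enumerates the fade states h at which it vanishes. Unit-energy QPSK at both end nodes, and all function names, are assumptions of the sketch.

import itertools
import numpy as np

qpsk = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)   # unit-energy QPSK

def min_distance(h, S=qpsk):
    """d_min of the effective constellation {x1 + h*x2} at the relay."""
    pairs = [(x1, x2) for x1 in S for x2 in S]
    return min(abs((a1 - b1) + h * (a2 - b2))
               for (a1, a2), (b1, b2) in itertools.combinations(pairs, 2))

def singular_fade_states(S=qpsk):
    """All h with d_min(h) = 0: h = -(x1 - x1')/(x2 - x2') over x2 != x2'.
    h = 0 is the (non-removable) outage state; the rest are removable."""
    return {complex(np.round(-(a1 - b1) / (a2 - b2), 9))
            for a1, b1, a2, b2 in itertools.product(S, repeat=4) if a2 != b2}

states = singular_fade_states()
h0 = next(s for s in states if abs(s) > 1e-9)
print(len(states), round(min_distance(h0), 12))   # d_min vanishes at h0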
Abstract:
Electrical Impedance Tomography (EIT) is a computerized medical imaging technique which reconstructs the electrical impedance images of a domain under test from the boundary voltage-current data measured by EIT electronic instrumentation, using an image reconstruction algorithm. Being a computed tomography technique, EIT injects a constant current into the patient's body through surface electrodes surrounding the domain to be imaged (Ω) and calculates the spatial distribution of electrical conductivity or resistivity of the closed conducting domain from the potentials developed at the domain boundary (∂Ω). Practical phantoms are essential to study, test and calibrate a medical EIT system in order to certify the system before applying it to patients for diagnostic imaging. The EIT phantoms are therefore required to generate boundary data for studying and assessing the instrumentation and inverse solvers in EIT. For proper assessment of an inverse solver of a 2D EIT system, a perfect 2D practical phantom is required. As practical phantoms are assemblies of objects with 3D geometries, developing a practical 2D phantom is a great challenge, and the boundary data generated from practical phantoms with 3D geometry are found inappropriate for assessing a 2D inverse solver. Furthermore, the boundary data errors contributed by the instrumentation are difficult to separate from the errors produced by the 3D phantoms. Hence, error-free boundary data are essential to assess the inverse solver in 2D EIT. In this direction, a MATLAB-based Virtual Phantom for 2D EIT (MatVP2DEIT) is developed to generate accurate boundary data for assessing 2D-EIT inverse solvers and image reconstruction accuracy. MatVP2DEIT is a MATLAB-based computer program which simulates a phantom in the computer and generates boundary potential data as outputs, using combinations of different phantom parameters as inputs. Phantom diameter, inhomogeneity geometry (shape, size and position), number of inhomogeneities, applied current magnitude, background resistivity and inhomogeneity resistivity are all set as phantom variables, provided as input parameters to MatVP2DEIT for simulating different phantom configurations. A constant current injection is simulated at the phantom boundary with different current injection protocols, and boundary potential data are calculated. Boundary data sets are generated for different phantom configurations obtained with different combinations of the phantom variables, and the resistivity images are reconstructed using EIDORS. Boundary data of virtual phantoms containing inhomogeneities with complex geometries are also generated for different current injection patterns using MatVP2DEIT, and the resistivity imaging is studied. The effect of the regularization method on image reconstruction is also studied with the data generated by MatVP2DEIT. Resistivity images are evaluated by studying the resistivity parameters and contrast parameters estimated from the elemental resistivity profiles of the reconstructed phantom domain. Results show that MatVP2DEIT generates accurate boundary data for different types of single or multiple objects, efficient and accurate enough to reconstruct the resistivity images in EIDORS.
The spatial resolution studies show that resistivity imaging conducted with the boundary data generated by MatVP2DEIT with 2048 elements can reconstruct two circular inhomogeneities placed with a minimum distance (boundary to boundary) of 2 mm. It is also observed that, in MatVP2DEIT with 2048 elements, the boundary data generated for a phantom with a circular inhomogeneity of diameter less than 7% of that of the phantom domain can produce resistivity images in EIDORS with a 1968-element mesh. Results also show that MatVP2DEIT accurately generates the boundary data for neighbouring, opposite-reference and trigonometric current patterns, which are very suitable for resistivity reconstruction studies. MatVP2DEIT-generated data are also found suitable for studying the effect of different regularization methods on the reconstruction process. By comparing the reconstructed image with the original geometry defined in MatVP2DEIT, it becomes easier to study the resistivity imaging procedure as well as the inverse solver performance. Using the proposed MatVP2DEIT software with modified domains, the cross-sectional anatomy of a number of body parts can be simulated on a PC and the impedance image reconstruction of human anatomy can be studied.
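As a rough illustration of the current-injection protocols named above, the Python sketch below builds electrode current patterns for a ring of L electrodes. It is a schematic stand-in for MatVP2DEIT's MATLAB implementation: the function name, defaults, and the exact pairing conventions (adjacent pairs, diametric pairs, cos/sin patterns) are our simplifying assumptions.

import numpy as np

def current_patterns(L=16, I=1e-3, protocol="neighbouring"):
    """Electrode current patterns; each column is one injection summing to zero."""
    P = np.zeros((L, L))
    if protocol == "neighbouring":            # adjacent drive pairs
        for k in range(L):
            P[k, k], P[(k + 1) % L, k] = I, -I
    elif protocol == "opposite":              # diametric drive pairs
        for k in range(L):
            P[k, k], P[(k + L // 2) % L, k] = I, -I
    elif protocol == "trigonometric":         # distributed cos/sin patterns
        theta = 2 * np.pi * np.arange(L) / L
        cols = []
        for k in range(1, L // 2 + 1):
            cols.append(I * np.cos(k * theta))
            if k < L // 2:
                cols.append(I * np.sin(k * theta))
        P = np.column_stack(cols)             # L - 1 independent patterns
    return P

P = current_patterns(16, protocol="trigonometric")
print(P.shape, np.allclose(P.sum(axis=0), 0))  # each pattern sums to ~0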
Abstract:
Regenerating codes and codes with locality are two recently proposed coding schemes which, in addition to ensuring data collection and reliability, also enable efficient node repair. When repairing a failed node, regenerating codes seek to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. This paper presents results in two directions. In the first, it extends the notion of codes with locality so as to permit local recovery of an erased code symbol even in the presence of multiple erasures, by employing local codes having minimum distance > 2. An upper bound on the minimum distance of such codes is presented and codes optimal with respect to this bound are constructed. The second direction seeks to build codes that combine the advantages of both codes with locality and regenerating codes. These codes, termed here codes with local regeneration, are codes with locality over a vector alphabet in which the local codes themselves are regenerating codes. We derive an upper bound on the minimum distance of vector-alphabet codes with locality for the case when their constituent local codes have a certain uniform rank accumulation property. This property is possessed by both minimum storage regenerating (MSR) and minimum bandwidth regenerating (MBR) codes. We provide several constructions of codes with local regeneration which achieve this bound, where the local codes are either MSR or MBR codes. Also included in this paper is an upper bound on the minimum distance of a general vector code with locality, as well as a performance comparison of various code constructions of fixed block length and minimum distance.
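For context, the minimum-distance bound reported in this line of work for scalar codes with (r, δ) locality, the setting where local codes have minimum distance δ > 2, takes the closed form below; the short Python helper is offered only as an illustration of that formula, not as a statement of this paper's vector-alphabet results.

from math import ceil

def locality_dmin_bound(n, k, r, delta):
    """d <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1)."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

# delta = 2 recovers the classical locality bound d <= n - k - ceil(k/r) + 2
print(locality_dmin_bound(n=14, k=8, r=4, delta=2))   # -> 6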
Abstract:
In this paper, we study codes with locality that can recover from two erasures via a sequence of two local, parity-check computations. By a local parity-check computation, we mean recovery via a single parity-check equation of small Hamming weight. Earlier approaches considered recovery in parallel; the sequential approach allows us to potentially construct codes with improved minimum distance. These codes, which we refer to as locally 2-reconstructible codes, are a natural generalization, along one direction, of the codes with all-symbol locality introduced by Gopalan et al., in which recovery from a single erasure is considered. By studying the generalized Hamming weights of the dual code, we derive upper bounds on the minimum distance of locally 2-reconstructible codes and provide constructions, based on Turán graphs, for a family of codes that are optimal with respect to this bound. The minimum distance bound derived here is universal in the sense that no code permitting all-symbol local recovery from two erasures can have larger minimum distance, regardless of the approach adopted. Our approach also leads to a new bound on the minimum distance of codes with all-symbol locality for the single-erasure case.
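Since everything above revolves around the minimum distance of small linear codes, a brute-force check is sometimes handy. The generic Python sketch below (unrelated to the Turán-graph construction itself) computes d_min of a binary code from its generator matrix by enumerating codewords.

import itertools
import numpy as np

def min_distance(G):
    """Minimum Hamming weight over nonzero codewords of the row space of G (mod 2)."""
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if any(m):
            c = np.mod(np.dot(m, G), 2)
            best = min(best, int(c.sum()))
    return best

# [7,4] Hamming code generator matrix: d_min = 3
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
print(min_distance(G))   # -> 3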
Abstract:
This paper estimates a standard version of the New Keynesian monetary (NKM) model under alternative specifications of the monetary policy rule using U.S. and Eurozone data. The estimation procedure implemented is a classical method based on the indirect inference principle, with an unrestricted VAR as the auxiliary model. On the one hand, the proposed estimation method overcomes some of the shortcomings of using a structural VAR as the auxiliary model in order to identify the impulse response that defines the minimum distance estimator implemented in the literature. On the other hand, by following a classical approach we can further assess the estimation results found in recent papers that follow a Bayesian maximum-likelihood approach. The estimation results show that some structural parameter estimates are quite sensitive to the specification of monetary policy. Moreover, the estimation results for the U.S. show that the fit of the NKM model under an optimal monetary plan is much worse than the fit of the NKM model assuming a forward-looking Taylor rule. In contrast to the U.S. case, in the Eurozone the best fit is obtained assuming a backward-looking Taylor rule, but the improvement is rather small with respect to assuming either a forward-looking Taylor rule or an optimal plan.
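The indirect-inference / minimum-distance logic is easy to caricature in a few lines: simulate the structural model, fit the auxiliary model to both observed and simulated data, and minimize the distance between the two sets of auxiliary estimates. The Python toy below replaces the NKM model and VAR with an AR(1) and an OLS slope; everything in it is an invented stand-in, not the paper's procedure.

import numpy as np
from scipy.optimize import minimize_scalar

def simulate(rho, T=500, seed=0):
    e = np.random.default_rng(seed).normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    return y

def auxiliary(y):                       # auxiliary "VAR": an OLS AR(1) slope
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

y_obs = simulate(0.7, seed=123)         # pretend this is the observed data
beta_hat = auxiliary(y_obs)

def objective(rho):                     # squared auxiliary-parameter distance
    sims = [auxiliary(simulate(rho, seed=s)) for s in range(20)]
    return (beta_hat - np.mean(sims)) ** 2

res = minimize_scalar(objective, bounds=(0.0, 0.99), method="bounded")
print(round(res.x, 2))                  # recovers a value near 0.7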
Abstract:
The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.
The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
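For readers who want to see the Ingleton inequality operationally, the hedged Python sketch below evaluates its entropy form, I(X1;X2) <= I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4), from a joint pmf of four discrete variables; a negative gap would certify an entropy vector outside the reach of linear network codes. The function names and the independent-bits test case are ours, not the thesis's group-characterizable construction.

import numpy as np

def H(p, axes):
    """Joint entropy (bits) of the variables indexed by `axes` in pmf array p."""
    keep = tuple(i for i in range(p.ndim) if i not in axes)
    m = p.sum(axis=keep) if keep else p
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

def ingleton_gap(p):
    h = lambda *a: H(p, set(a))
    I12 = h(0) + h(1) - h(0, 1)
    I12_3 = h(0, 2) + h(1, 2) - h(0, 1, 2) - h(2)
    I12_4 = h(0, 3) + h(1, 3) - h(0, 1, 3) - h(3)
    I34 = h(2) + h(3) - h(2, 3)
    return I12_3 + I12_4 + I34 - I12    # negative => Ingleton violated

p = np.full((2, 2, 2, 2), 1 / 16.0)     # four independent uniform bits
print(ingleton_gap(p) >= -1e-12)        # -> True (no violation here)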
The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
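A tiny numpy illustration of the "subset of rows of a Fourier matrix" idea: keep rows indexed by a difference set and measure the resulting frame's coherence. The specific row choice {1, 2, 4} mod 7 is a classical difference-set example that meets the Welch bound; it is used here only as an illustration, not as the thesis's group-representation construction.

import numpy as np

def partial_fourier_frame(n, rows):
    F = np.exp(-2j * np.pi * np.outer(range(n), range(n)) / n) / np.sqrt(n)
    A = F[list(rows), :]                   # m x n frame: columns are vectors
    return A / np.linalg.norm(A, axis=0)   # unit-norm frame vectors

def coherence(A):
    G = np.abs(A.conj().T @ A)             # magnitudes of mutual inner products
    np.fill_diagonal(G, 0)
    return G.max()

A = partial_fourier_frame(7, rows=[1, 2, 4])   # a (7, 3, 1) difference set
print(round(coherence(A), 4))   # -> 0.4714, the Welch bound for n=7, m=3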
The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
Abstract:
The spatial behaviour of individuals is a key component for understanding the population dynamics of organisms and clarifying the migration and dispersal potential of species. Several factors affect the locomotor activity of land snails, such as temperature, light, humidity, season of the year, shell size, sex, reproductive strategy, age, density of conspecifics and food availability. One of the methods used to study the displacement of terrestrial gastropods is mark-recapture. Terrestrial gastropods lend themselves to this type of study because of (1) their small size, (2) easy handling, (3) easy capture and (4) short displacement distances and, consequently, small home ranges. These organisms serve as models for the study of spatial ecology and dispersal. Population studies investigating space use, spatial distribution, population density and home range are scarce for land snails and even rarer in tropical natural areas. Our study subject is Hypselartemon contusulus (Férussac, 1827), a carnivorous land snail of the family Streptaxidae, very abundant in the leaf litter on flat stretches of secondary forest on the Parnaioca Trail, Ilha Grande, Rio de Janeiro. The species is endemic to the state of Rio de Janeiro. It reaches up to 7.2 mm in height, with 6 to 7 whorls. In this work we studied the variables ambient temperature, soil temperature, air humidity, luminosity, leaf-litter depth, animal size, density of conspecifics and density of prey, relating these ecological data to the displacement observed in Hypselartemon contusulus. One of the working hypotheses is that these variables affect its displacement. The work was carried out on Ilha Grande, in the south of the State of Rio de Janeiro, in the municipality of Angra dos Reis. The animals were captured and marked with an individual code painted on the shell with liquid correction fluid and India ink pen. Displacement distances, in cm, were recorded by measuring the distances between subsequent markers. The results indicate that the method used is effective for individually marking Hypselartemon contusulus in medium-term studies (up to nine months). We suggest this marking method for studies of terrestrial gastropods threatened with extinction, such as some species of the families Bulimulidae, Megalobulimidae, Streptaxidae and Strophocheilidae. Hypselartemon contusulus does not maintain a minimum distance from its neighbours and is active throughout the year and throughout the day, showing locomotion and predation activity. No animals were found sheltering under stones or dead wood. No activity sites were observed as opposed to resting/shelter sites. Beckianum beckianum (Pfeiffer, 1846) was the preferred prey. Population density varied from 0.57 to 1.2 individuals/m² between sampling campaigns. The species moves, on average, 26.57 ± 17.07 cm/24 h on the Parnaioca Trail, Ilha Grande. The home range of H. contusulus is small: at most 0.48 m² over three days and 3.64 m² over 79 days. The displacement of the species varied throughout the year, but this variation is not affected by the ecological variables studied. This is therefore a plastic behaviour in H. contusulus, probably controlled by endogenous factors.
Abstract:
The living ungulates (Cetartiodactyla and Perissodactyla) in the studied regions are represented by 11 genera and 24 species. The present study aims to recognize the distribution patterns of these species through the application of the pan-biogeographic method of track analysis. This method aids an a priori understanding of congruent distribution patterns and a comprehension of the patterns and processes of geographic differentiation in time and space, reconstructing the biogeography of taxa. With regard to conservation, the method was applied to the identification of priority areas for conservation. Applying the method basically consists of marking the occurrence localities of the different taxa on maps and connecting these localities with lines following a minimum-distance criterion, resulting in the so-called individual tracks, which were plotted on the Central and South America biome maps of the ArcView GIS 3.2 program. The superposition of these individual tracks defines a generalized track, suggesting a common history, that is, the pre-existence of an ancestral biota subsequently fragmented by vicariant events. The intersection of two or more generalized tracks corresponds to a biogeographic node, which represents composite, complex areas in which distinct biogeographic histories converge. For the pan-biogeographic analysis, the ArcView GIS 3.2 software and the Trazos 2004 extension were used. From the superposition of the 24 individual tracks, five generalized tracks (TGs) were recognized: TG1, Mesoamerica/Chocó, composed of Mazama pandora, M. temama and Tapirus bairdii; TG2, Northern Andes (Mazama rufina, Pudu mephistophiles and Tapirus pinchaque); TG3, Central Andes (Hippocamelus antisensis, Lama guanicoe, Mazama chunyi and Vicugna vicugna); TG4, Chilean Patagonia (Hippocamelus bisulcus and Pudu puda); TG5, Chaco/Centre-West Brazil (Blastocerus dichotomus, Catagonus wagneri and Ozotocerus bezoarticus); and one biogeographic node at Antioquia in northwestern Colombia. The species Mazama americana, M. bricenii, M. goazoubira, M. nana, Tapirus terrestris, Tayassu pecari and T. tajacu did not participate in any of the generalized tracks. The distribution patterns formed from the generalized tracks indicate that the living ungulates underwent fragmentation and differentiation in the Pleistocene, related to historical events in the Neotropical region, the South American Transition Zone and the Andean region, explained by movements along the tectonic fault zones of Central and South America, by volcanism and by climatic changes. The formation of the Altiplano-Puna plateau proved to be a geographic barrier, in past as in present times, for most of the South American biota, with the exception of the camelids, which inhabit these areas of Argentina, western Bolivia and southwestern Peru. The biogeographic node confirmed the presence of biotic components of different origins, constituting an area of great biological diversity and endemism, thus suggesting a conservation unit in northwestern South America.
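The minimum-distance criterion for drawing an individual track amounts to a minimum spanning tree over a taxon's occurrence localities. The Python sketch below shows the computation with invented coordinates and plain Euclidean distance; a real analysis would use great-circle distances and the ArcView/Trazos occurrence records.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# hypothetical (lon, lat) occurrence records for one taxon
pts = np.array([[-75.5, 6.2], [-74.1, 4.6], [-77.0, 1.2], [-78.5, -0.2]])

D = squareform(pdist(pts))              # pairwise distances between localities
T = minimum_spanning_tree(D)            # sparse MST = the individual track
for i, j in np.transpose(T.nonzero()):  # the track's line segments
    print(tuple(pts[i]), "--", tuple(pts[j]))
print("track length:", round(T.sum(), 3))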
Abstract:
Based on biomimetic pattern recognition theory, we propose a novel speaker-independent continuous-speech keyword-spotting algorithm. Without endpoint detection and segmentation, we obtain the minimum-distance curve between the continuous speech samples and each keyword-training net through a dynamic search over the feature-extracted continuous speech. We can then count the number of keywords by examining the depths and number of the valleys in the curve. Experiments on small-vocabulary continuous speech at various speaking rates gave good recognition results and demonstrate the validity of the algorithm.
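A rough Python sketch of the valley-counting step is given below; the Euclidean window distance and fixed-length sliding window are simplifying assumptions standing in for the paper's biomimetic training net and dynamic search, and the threshold is ours.

import numpy as np
from scipy.signal import find_peaks

def min_distance_curve(feats, template):
    """Distance between each window of `feats` and a keyword `template`
    (both are frame-by-dimension arrays)."""
    w = len(template)
    return np.array([np.linalg.norm(feats[t:t + w] - template)
                     for t in range(len(feats) - w + 1)])

def count_keyword_hits(curve, depth):
    """Keyword occurrences = valleys of the curve deeper than `depth`."""
    valleys, _ = find_peaks(-curve, height=-depth)
    return len(valleys)

rng = np.random.default_rng(0)
template = rng.normal(size=(20, 12))           # a 20-frame keyword model
speech = rng.normal(size=(300, 12))            # feature-extracted speech
speech[100:120] = template                     # embed one keyword occurrence
curve = min_distance_curve(speech, template)
print(count_keyword_hits(curve, depth=15.0))   # -> 1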
Abstract:
To address the problem of computing the minimum distance between Bézier curves, a simple and reliable method is proposed. Using Bernstein polynomial arithmetic as its tool, the method establishes a computational model of the minimum distance between Bézier curves, and then solves it by fully exploiting the convex hull property of Bézier curves and the de Casteljau subdivision algorithm. The method has a clear geometric meaning, effectively avoids both the choice of initial values for iteration and the solution of systems of nonlinear equations, and can be further extended to computing the minimum distance between Bézier curves and surfaces. Experimental results show that the method is simple, reliable and easy to implement, and that combining it with the Newton-Raphson method can further improve its running speed.
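The subdivision idea can be prototyped compactly: recursively split both curves with de Casteljau's algorithm and prune branches whose control-point bounding boxes (a weakening of the convex hull property used in the paper) are already farther apart than the best distance found. The Python sketch below is our own simplified branch-and-bound, not the authors' algorithm.

import numpy as np

def split(P):
    """de Casteljau subdivision of control points P at t = 1/2."""
    left, right, Q = [P[0]], [P[-1]], P
    while len(Q) > 1:
        Q = (Q[:-1] + Q[1:]) / 2
        left.append(Q[0]); right.append(Q[-1])
    return np.array(left), np.array(right)[::-1]

def box_gap(P, Q):
    """Lower bound: distance between the control-point bounding boxes."""
    d = np.maximum(0, np.maximum(P.min(0) - Q.max(0), Q.min(0) - P.max(0)))
    return np.linalg.norm(d)

def min_dist(P, Q, tol=1e-6):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    best, stack = np.inf, [(P, Q)]
    while stack:
        A, B = stack.pop()
        for a in (A[0], A[-1]):                # curve endpoints: upper bounds
            for b in (B[0], B[-1]):
                best = min(best, np.linalg.norm(a - b))
        if box_gap(A, B) + tol < best:         # still promising: subdivide
            for A2 in split(A):
                for B2 in split(B):
                    stack.append((A2, B2))
    return best

P = [[0, 0], [1, 2], [2, 2], [3, 0]]           # two cubic Bezier curves
Q = [[0, 3], [1, 1.5], [2, 4], [3, 3]]
print(round(min_dist(P, Q), 4))                # their minimum distance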
Abstract:
The past two decades have witnessed an unprecedented growth of interest in the palaeoenvironmental significance of the Pleistocene loess deposits in northern China. However, it was only several years ago that the Tertiary red clay sequence underlying the Pleistocene loess attracted much attention. One of the major advances in recent studies of eolian deposits on the Loess Plateau is the verification of the eolian origin of the Tertiary red clay sediments. The evidence for the eolian origin of the red clay comes mainly from geochemical and sedimentological studies. However, sedimentological studies of the red clay deposits are still few compared with those of the overlying loess sediments. To date, the red clay sections located near Xifeng, Baoji, Lantian, Jiaxian, and Lingtai have been studied, with an emphasis on magnetostratigraphy. These sections have a basal age ranging from ~4.3 Ma to ~7.0 Ma. The thickness of the sections varies significantly, depending perhaps on the development of local geomorphological conditions and the drainage system. Although the stratigraphy of the red clay sections has been recorded in some detail, correlation of the red clay sequences has not yet been undertaken. Geological records (Sun J. et al., 1998) have shown that during glacial periods of the Quaternary the deserts in northern China were greatly expanded compared with the modern desert distribution. During interglacial periods, desert areas contracted and retreated mostly to northwestern China because of the increased inland penetration of monsoonal precipitation. According to the pedogenic characteristics of the red clay deposits, the climatic conditions of the Loess Plateau were generally warmer and wetter in the Neogene than in the late Pleistocene. Particle-size analyses show that the grain size distribution of the red clay sequence is similar to that of the paleosols in the Pleistocene loess record, implying a relatively remote provenance of the red clay materials. However, quantitative or semiquantitative estimates of the distance from the source region to the Loess Plateau during red clay development remain to be investigated. In this study, magnetostratigraphic and sedimentological studies are conducted at two thick red clay sequences, the Jingchuan and Lingtai sections. The objectives of these studies are focused on further sedimentological evidence for the eolian origin of the red clay, correlation of red clay sequences, provenance of the red clay, and palaeoclimate reconstruction in the Neogene. Paleomagnetic studies show that the Jingchuan red clay has a basal age of 8.0 Ma, which is 1 million years older than that of the previously studied Lingtai section. The Lingtai red clay sequence was divided into five units on the basis of pedogenic characteristics (Ding et al., 1999a). The Jingchuan red clay sequence, however, can be lithologically divided into six units according to field observations. The upper five units of the Jingchuan red clay generally correlate well with the five units of the Lingtai red clay. Comparison of the magnetic susceptibility and color reflectance records of four red clay sections suggests that the Lingtai red clay sequence can serve as the type section of the Neogene red clay deposits in northern China. Pleistocene loess and modern dust deposits have a unimodal grain-size distribution. The red clay sediments at Jingchuan and Lingtai also have a unimodal grain-size distribution, especially similar to that of the paleosols in the Pleistocene loess record.
Sedimentological studies of a north-south transect of loess deposits above S2 on the Loess Plateau show that the loess deposits had distinct temporal and spatial sedimentary differentiation. The characteristics of this sedimentary differentiation are well presented in a triangular diagram of normalized median grain size, normalized skewness, and normalized kurtosis. The triangular diagrams of the red clay-loess sequence at Lingtai and Jingchuan indicate that loess, paleosols and red clay may have been transported and sorted by the same agent, wind, thus extending the eolian record of the Loess Plateau from 2.6 Ma back to about 8.0 Ma. It has been recognized that during the last glacial maximum (LGM) the deserts in northern China had a distribution similar to the present one, whereas during the Holocene Optimum the deserts retreated to the area west of the Helan Mountains. Advance-retreat cycles of the deserts change the distance of the Loess Plateau to the dust source regions, thereby controlling changes in grain size of the loess deposited at a specific site. To observe spatial changes in the sedimentological characteristics of loess during the last glacial-interglacial cycle, the texture of loess was measured along the north-south transect of the Loess Plateau. Since the southern margin of the Mu Us desert during the LGM is already known, several models of grain size parameters versus the minimum distance from the source region to the depositional areas were developed. According to these semiquantitative models, the minimum distance from the source region to the Lingtai and Jingchuan areas was about 600 km during the Neogene. Therefore, the estimated provenance of the Tertiary red clay deposits is the areas now occupied by the Badain Jaran desert and the arid regions west of it. The ratio of free iron to total iron concentration proves to be a good proxy indicator for summer monsoon evolution. The Lingtai Fe₂O₃ ratio record shows high values over three time intervals: 4.8-4.1 Ma, 3.4-2.6 Ma, and the interglacial periods of the past 0.5 Ma. The increase in summer monsoon intensity over the three intervals also coincides with well-developed soil characteristics. It is therefore concluded that the East Asian summer monsoon has experienced a non-linear evolution since the late Miocene. In general, the East Asian summer monsoon was stronger in the Neogene than in the Quaternary, and the strongest East Asian summer monsoon may have occurred between 4.8 and 4.1 Ma. The relatively small ice volume and high global temperature may be responsible for the strong summer monsoon during the early Pliocene.
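The flavour of such a semiquantitative model can be conveyed with a toy Python calculation: fit a decay law of median grain size against known source distances, then invert it at a site of interest. All numbers and the exponential form below are invented for illustration; only the roughly 600 km order of magnitude echoes the text.

import numpy as np
from scipy.optimize import curve_fit

dist_km = np.array([50, 150, 300, 450, 600])           # assumed source distances
median_um = np.array([42.0, 30.0, 20.5, 14.8, 11.0])   # assumed median grain sizes

decay = lambda x, a, b: a * np.exp(-b * x)             # assumed decay law
(a, b), _ = curve_fit(decay, dist_km, median_um, p0=(50, 0.002))

md_site = 11.2                        # hypothetical value at a red clay site
est_km = np.log(a / md_site) / b      # invert the fitted model
print(round(est_km))                  # on the order of 600 km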
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding; it is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) in multihop wireless sensor network (WSN) applications.
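Evaluation-based encoding itself is simple to demonstrate: treat the k message symbols as polynomial coefficients and evaluate at n distinct field points. The Python toy below works over the prime field GF(929) to keep the arithmetic plain, whereas the thesis targets hardware over binary extension fields; the field choice and names are ours.

p = 929  # prime field size; code length n <= p - 1

def rs_encode(msg, n):
    """Evaluate the message polynomial at n distinct field points (an RS codeword)."""
    def poly_eval(coeffs, x):
        y = 0
        for c in reversed(coeffs):   # Horner's rule mod p
            y = (y * x + c) % p
        return y
    return [poly_eval(msg, x) for x in range(1, n + 1)]

msg = [3, 2, 1]            # k = 3 message symbols as polynomial coefficients
cw = rs_encode(msg, n=7)   # an [n, k] = [7, 3] RS code: d = n - k + 1 = 5
print(cw)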
Abstract:
Movements of wide-ranging top predators can now be studied effectively using satellite and archival telemetry. However, the motivations underlying movements remain difficult to determine because trajectories are seldom related to key biological gradients, such as changing prey distributions. Here, we use a dynamic prey landscape of zooplankton biomass in the north-east Atlantic Ocean to examine active habitat selection in the plankton-feeding basking shark Cetorhinus maximus. The relative success of shark searches across this landscape was examined by comparing prey biomass encountered by sharks with encounters by random-walk simulations of 'model' sharks. Movements of transmitter-tagged sharks monitored for 964 days (16,754 km estimated minimum distance) were concentrated on the European continental shelf in areas characterized by high seasonal productivity and complex prey distributions. We show that movements by adult and sub-adult sharks yielded consistently higher prey encounter rates than 90% of random-walk simulations. Behavioural patterns were consistent with basking sharks using search tactics structured across multiple scales to exploit the richest prey areas available in preferred habitats. Simple behavioural rules based on learned responses to previously encountered prey distributions may explain the high performance. This study highlights how dynamic prey landscapes enable active habitat selection in large predators to be investigated from a trophic perspective, an approach that may inform conservation by identifying the critical habitat of vulnerable species.
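The null-model comparison can be schematized in a few lines of Python: score prey biomass along an observed track and along many random walks from the same start, then report the fraction of simulations the real track beats. The landscape, track and parameters below are invented stand-ins, not the study's data.

import numpy as np

rng = np.random.default_rng(42)
prey = rng.gamma(2.0, 1.0, size=(100, 100))   # synthetic prey landscape
prey[40:60, 40:60] += 6                       # one rich prey patch

def encounter_rate(path):
    ij = np.clip(np.round(path).astype(int), 0, 99)
    return prey[ij[:, 0], ij[:, 1]].mean()

def random_walk(start, steps=200):
    turns = rng.uniform(0, 2 * np.pi, steps)  # uncorrelated random headings
    return start + np.cumsum(np.stack([np.cos(turns), np.sin(turns)], 1), 0)

# "observed" track that stays inside the rich patch
track = np.column_stack([np.linspace(45, 55, 200), np.linspace(45, 55, 200)])
obs = encounter_rate(track)
null = [encounter_rate(random_walk(track[0])) for _ in range(1000)]
print(np.mean(obs > np.array(null)))   # fraction of random walks beaten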
Abstract:
Objective assessment of animal personality is typically time consuming, requiring repeated measures of behavioural responses. By contrast, subjective assessment of personality allows information to be collected quickly by experienced caregivers. However, subjective assessment must predict behaviour to be valid. Comparisons of subjective assessments and behaviour have been made, but often with methodological weaknesses and thus limited success. Here we test the validity of a subjective assessment against a battery of behaviour tests in 146 horses (Equus caballus). Our first aim was to determine whether subjective personality assessment could predict behaviour during behaviour testing. We made specific a priori predictions for how subjectively measured personality should relate to behaviour testing. We found that Extroversion predicted time to complete a handling test and refusal behaviour during this test. It also predicted minimum distance to a novel object. Neuroticism predicted how reactive an individual was to a sudden visual stimulus but not how quickly it recovered from this. Agreeableness did not predict any behaviour during testing. There were several unpredicted correlations between subjective measures and behaviour tests which we explore further. Our second aim was to combine data from the subjective assessment and behaviour tests to gain a more comprehensive understanding of personality. We found that the combination of methods provides new insights into horse behaviour. Furthermore, our data are consistent with the idea of horses showing different coping styles, a novel finding for this species.
Abstract:
The title compound, [CdCl₂(C₆H₇N₃O)₂], was obtained unintentionally as a product of an attempted reaction of CdCl₂·2.5H₂O and picolinic acid hydrazide, carried out in order to obtain a cadmium(II) complex analogous to a 15-metallacrown-5 complex of formula [MCu₅L₅]Xₙ, with M = a central metal ion, L = picolinic acid hydrazide and X = Cl⁻, but with cadmium the only metal present. The coordination geometry around the Cd(II) atom can be considered as distorted octahedral, with two bidentate picolinic acid hydrazide ligands, each coordinating through their carbonyl O atom and amino N atom, and two chloride anions. In the crystal structure, intermolecular N-H⋯Cl and N-H⋯N hydrogen bonds link the molecules into a two-dimensional network parallel to the (100) plane. The pyridine rings of adjacent networks are involved in π-π stacking interactions, the minimum distance between the ring centroids being 3.693 (2) Å.