Publication Year: 2002
DOI: https://doi.org/10.1038/nn831
Abstract: Not available
Authors:
Publication Year: 2003
DOI: https://doi.org/10.1121/1.1624067
Abstract: … sounds, in general, are low-passed, showing most of their modulation energy for low temporal and spectral modulations. Animal vocalizations and human speech are further characterized by the fact that most of the spectral modulation power is found only for low temporal modulation. Similarly, …
Authors:
Publication Year: 2000
DOI: https://doi.org/10.1523/jneurosci.20-06-02315.2000
Abstract: … the STRFs derived using natural sounds are strikingly different from the STRFs that we obtained using an ensemble of random tone pips. When we compare these two models by assessing their predictions of neural response to the actual data, we find that the …
Authors:
Publication Year: 2005
DOI: https://doi.org/10.1109/icassp.1982.1171649
Abstract: … voiced-unvoiced decision or the pitch period. All classes of sounds are generated by exciting the LPC filter with a sequence of pulses; the amplitudes and locations of the pulses are determined using a non-iterative analysis-by-synthesis procedure. This procedure minimizes a perceptual-distance metric representing …
Authors:
Publication Year: 2004
DOI: https://doi.org/10.1523/jneurosci.4445-03.2004
Abstract: … Using regularization techniques, we estimated the linear component, the spectrotemporal receptive field (STRF), of the transformation from the sound (as represented by its time-varying spectrogram) to the membrane potential of the neuron. We find that the STRF has a rich dynamical structure, including …
Authors:
Publication Year: 2001
DOI: https://doi.org/10.1152/jn.2001.86.3.1445
Abstract: … calculate the STRFs, which are the best linear model of the spectral-temporal features of sound to which auditory neurons respond. We find that these neurons respond to a wide variety of features in songs ranging from simple tonal components to relatively more complex spectral-temporal …
Authors:
Publication Year: 2014
DOI: https://doi.org/10.1038/nrn3731
Abstract: Not available
Authors:
Publication Year: 1999
DOI: https://doi.org/10.1038/16456
Abstract: Not available
Authors:
Publication Year: 1962
DOI: Not available
Abstract: Artificial reverberation is added to sound for optimum listening enjoyment. This paper describes methods for generating, by purely electronic means, an artificial reverberation which is indistinguishable from the natural reverberation of real rooms. This artificial reverberation can be given any desired characteristics to match different types of music and personal …
Authors:
Publication Year: 2014
DOI: https://doi.org/10.1371/journal.pcbi.1003412
Abstract: … the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our …
Authors:
Publication Year: 2005
DOI: https://doi.org/10.1038/nn1536
Abstract: Not available
Authors:
Publication Year: 2018
DOI: https://doi.org/10.1109/cvpr.2018.00374
Abstract: … frames. We evaluate our models on a dataset of videos containing a variety of sounds (such as ambient sounds and sounds from people/animals). Our experiments show that the generated sounds are fairly realistic and have good temporal synchronization with the visual inputs. …
Authors:
Publication Year: 1996
DOI: https://doi.org/10.1088/0954-898x/7/2/005
Abstract: Unsupervised learning … the phase and frequency information inherent in the data. … the phase structure (higher-order statistics) of signals, which contains all the informative temporal and spatial coincidences which we think of as 'features'. Here we discuss how an Independent Component Analysis (ICA) algorithm may be used to elucidate …
Authors:
Publication Year: 2014
DOI: https://doi.org/10.1089/eco.2014.0028
Abstract: … sounds intermingled with anthropogenic sounds (human voices or motorized vehicles). Participants exposed to a brief period of natural sounds following the video showed greater mood recovery, as measured by the BMIS, than did those exposed to the same stimuli also containing human-caused sounds …
Authors:
Publication Year: 2012
DOI: https://doi.org/10.1523/jneurosci.1388-12.2012
Abstract: … of narrow tuning that follows the main axis of Heschl's gyrus and is flanked by regions of broader tuning. The narrowly tuned portion on Heschl's gyrus contains two mirror-symmetric frequency gradients, presumably defining two distinct primary auditory areas. In addition, our analysis indicates that …
Authors:
Found 1996704 results in 0.57 seconds
Including any of the words AND, OR, or NOT in any of your searches will enable boolean search. Those words must be UPPERCASE. You can use this in all searches, including using the search parameter, and using search filters. This allows you to craft complex queries using those boolean operators along with parentheses and quotation marks. Surrounding a phrase with quotation marks will search for an exact match of that phrase, after stemming and stop-word removal (be sure to use double quotation marks: "). Using parentheses will specify order of operations for the boolean operators. Words that are not separated by one of the boolean operators will be interpreted as AND.
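As an illustration of how these pieces combine, here is a minimal Python sketch that assembles a boolean query string from quoted phrases, parentheses, and explicit operators; the helper function is purely illustrative and not part of any official client.

# Illustrative sketch: composing a boolean query string by hand.
def boolean_query(*parts: str) -> str:
    # Terms not separated by an explicit operator are treated as AND anyway,
    # but spelling the operators out keeps the intent of the query obvious.
    return " ".join(parts)

query = boolean_query('"natural sounds"', "AND", "(cortex OR cochlea)", "NOT", "reverberation")
print(query)  # "natural sounds" AND (cortex OR cochlea) NOT reverberation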
Behind the scenes, the boolean search is using Elasticsearch's query string query on the searchable fields (such as title, abstract, and fulltext for works; see each individual entity page for specifics about that entity). Wildcard and fuzzy searches using *, ?, or ~ are not allowed; these characters will be removed from any searches. These searches, even when using quotation marks, will go through the same cleaning as described above, including stemming and removal of stop words.
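Because *, ?, and ~ are stripped from queries anyway, a client can remove them up front so that the query it submits matches what will actually run. The snippet below is only a sketch of that idea, not part of the search service itself.

# Sketch: drop the disallowed wildcard/fuzzy characters before submitting a query.
def strip_disallowed(query: str) -> str:
    return query.translate(str.maketrans("", "", "*?~"))

print(strip_disallowed('sound* AND "auditory cortex"~2'))  # sound AND "auditory cortex"2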
Search for works that mention "elmo" and "sesame street", but not the words "cookie" or "monster":
"elmo" AND "sesame street" NOT "cookie" NOT "monster"