Put in Google: IEEE Fake Conference (you will find more than 30 fake IEEE conferences)

Tuesday, 21 January 2014

A Predatory Librarian: Jeffrey Beall, the crook, the felon, the criminal of the Academic Community.

Prof. Nicola Bellomo says (you can find this letter in many places on the web):
I recently made an inquiry to Jeffrey Beall (the Denver, USA librarian who runs a webpage where he slanders and insults about 500 publishing houses), asking whether he, Jeffrey Beall himself, is able to solve the simple math equation
 5x + 3 = 0.
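
(For the record, an editorial aside: solving it takes one line of algebra,

\[ 5x + 3 = 0 \;\Rightarrow\; 5x = -3 \;\Rightarrow\; x = -\tfrac{3}{5} = -0.6. \])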

Jeffrey Beall replied to my first email that he has never studied even the simplest form of math. Meaning that he doesn't know what "equation" means (he has never even seen equations like 5x + 3 = 0, 3x^2 + 7x - 4 = 0, etc.), nor does he know what "derivative" or "integral" mean.

Jeffrey Beall told me that he has a Bachelor's in Spanish and English. That, of course, didn't stop him from blacklisting hundreds of houses that publish journals in Math, Physics, Computer Science, Engineering, Economics, Biology, Chemistry, Earth Sciences, Space Science, etc. All this from a man who cannot even solve the simple equation 5x + 3 = 0, and who doesn't know what a derivative or an integral is.

Recently, Jeffrey Beall included in his "black list" an old, large academic publishing house with several historic journals in Math, Physics, Computer Science, Engineering, and Economics (some of them indexed in ISI and SCOPUS), and that because, according to Jeffrey Beall, they had copied the… Maxwell Equations from a 2007 article.

Obviously, since Jeffrey Beall cannot solve the equation 5x + 3 = 0, and since he doesn't know what a derivative or an integral is, he has zero knowledge of electricity or physics and has never seen the Maxwell Equations (not even in their most basic form).
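
(For reference, an editorial aside rather than part of Prof. Bellomo's letter: the Maxwell equations in their most basic differential form, standard first-year physics material, are

\[ \nabla\cdot\mathbf{E}=\frac{\rho}{\varepsilon_0},\qquad \nabla\cdot\mathbf{B}=0,\qquad \nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad \nabla\times\mathbf{B}=\mu_0\mathbf{J}+\mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}. \]

They appear in this identical form in every electromagnetics textbook, which is exactly why reproducing them cannot constitute plagiarism.)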

As expected from somebody who is entirely clueless about even elementary math and physics, he considered the Maxwell Equations found in the journal to be plagiarized… from a 2007 paper.

With a Bachelor's in Spanish and English on his CV, Jeffrey Beall passes judgment even on Medicine, Biology, and Chemistry journals and articles, while fully aware that he has never attended a university course covering which nucleotides make up the DNA molecule, has never heard what an enzyme, catalysis, or a protein is, and, if asked what pH is, would have no answer.

However, in his bizarre blog, this person has declared himself a critic of everyone and everything. He blacklists publishing houses (many of which have journals and conferences indexed in ISI, SCOPUS, Compendex, ACM, etc.), puts stand-alone journals on "black lists", and slanders Editors-in-Chief, authors, and so on. Of course he does all that selectively, following a certain logic of his own, which will be analyzed below.
In a later email, I asked him to comment on why he includes a small publishing house in his black list because "they copied Maxwell's Equations from a 2007 paper" (poor Jeffrey Beall doesn't know that Maxwell's Equations are taught in universities' first-year elementary physics), while at the same time he excludes IEEE, which has had over 85 SCIgen machine-generated fake conference papers published and indexed.

(See: a 2013 Scientometrics paper demonstrating that at least 85 SCIgen machine-generated papers have been published by IEEE. The paper was published by Springer: http://link.springer.com/article/10.1007%2Fs11192-012-0781-y)

He also didn't respond to the question of why he doesn't include Elsevier in his black list, although Elsevier was revealed to have published six fake medical journals between 2000 and 2005, with articles and studies funded by pharmaceutical companies in order to "scientifically" prove that their products were superior to their competitors'. See http://en.wikipedia.org/wiki/Elsevier

In a third email I asked him where his moral and academic responsibility stands: if, because he puts some publishing houses on black lists, those houses reduce or cease their activity (due to his immoral slandering), hundreds of jobs will be lost and families will end up in the street. Naturally, despite my repeated emails, Jeffrey Beall never replied.

There are also rumors on the internet that some publishing houses, like Hindawi and Elsevier, pay Jeffrey Beall annually in order not to be included in his black list. This looks like a heavy tax that the publisher is asked to pay Jeffrey Beall each year, and, as we'll see below, part of this tax ends up in the University of Denver's funds.

Indeed, Hindawi was on Jeffrey Beall's black list a year ago. Then, after negotiations, Jeffrey Beall moved them to a watch list (i.e., an "under observation" list), and eventually removed them completely.

As Jeffrey Beall himself mentioned in his blog, Hindawi's people visited him in Denver and offered him "explanations". After that, Jeffrey Beall gradually removed Hindawi from his black list.
Why, Mr. Jeffrey Beall, did you agree to meet with Hindawi's representatives in your office in Denver while Hindawi was blacklisted? What did you talk about, Mr. Jeffrey Beall? Hindawi, as mentioned on their website, has an annual turnover of $6 million.

Couldn’t they use part of that money to pay off Jeffrey Beall?

Furthermore, in his blog, Jeffrey Beall has posted a photo of Hindawi's headquarters, which he calls the "House of Spam". So, Mr. Jeffrey Beall, why isn't Hindawi on your black list, when spam is, as you mention in your blog, among your fundamental reasons for blacklisting?

Having read all that, you can draw your own conclusions about who Jeffrey Beall is and what the real motives are behind his blog blacklisting publishing houses and scientific organizations, the houses and organizations that Jeffrey Beall calls "Predatory Publishers".

Maybe it's time to talk about Predatory Librarians, Mr. Jeffrey Beall. About librarians who target Open Access journals precisely because the open, online PDF policy deprives librarians (like Jeffrey Beall) of the possibility of receiving kickbacks from publishing houses.

For those who are not aware: several publishing houses have paid, and still pay, kickbacks to librarians (like Jeffrey Beall) in order to get their libraries to subscribe to those houses' journals.

In other words, for a certain university, research center, or company to buy some books or subscribe to some journals, it is common knowledge that librarians receive money under the table from the respective publishing houses. It is therefore natural and understandable for librarians of this kind (Jeffrey Beall, for instance) to fight Open Access journals and Open Access publishing houses, since they 
a) lose their kickbacks, and 
b) lose their power and influence in the library, as well as in the university.

I have saved all my email exchanges with Jeffrey Beall, along with their headers and source code, and I will soon upload them to various websites. I need everyone's help, though: send me emails (to the address found at the bottom) and exchange information on Jeffrey Beall's scandalous behavior.

And one last question to Jeffrey Beall: How can a librarian WITHOUT a Ph.D. be an Assistant Professor at the University of Denver, Mr. Jeffrey Beall?

Could it be that Jeffrey Beall bribed older professors, using the abundance of money that he is said to possess? 

Could it be that Jeffrey Beall threatened that, if they didn't vote for him, he would put every journal in which they have published papers on his black list and slander them on the internet?

Or is it that they were so impressed by his research? Actually, Mr. Jeffrey Beall, what is your scientific research? Your scientific research as a "real scientist", that is, Mr. Jeffrey Beall. What publications do you have, besides slandering, insulting, and discrediting hundreds of scientific organizations and publishing houses? What do you teach at the University of Denver, Mr. Jeffrey Beall?

Is there really any course (a real scientific course) that you can teach, Mr. Jeffrey Beall, besides calling publishing houses and scientific organizations "predatory"?

 It doesn’t look like it, Mr. Jeffrey Beall. No matter how hard we looked, we didn’t find any courses taught by you at the University of Denver. 

Neither on your personal webpage, Mr. Jeffrey Beall, nor on your money-making blog, nor even on the University of Denver website is there any mention about courses taught by you.

So, since you do absolutely no scientific research, and you don't even teach undergraduate or postgraduate students, what is your role at the University of Denver, Mr. Jeffrey Beall?

Does the University of Denver pay you a salary, Mr. Jeffrey Beall, or do you pay the University to let you bear the title of Assistant Professor? 

A title that you really do not deserve, as you have no Ph.D. and no actual research work, and you do no teaching whatsoever. It is a shame for the University of Denver to have professors like you, Jeffrey Beall.

Or is running a blog that slanders everyone and everything considered scientific research? 

It most certainly is not, Mr. Jeffrey Beall.

Could it be, however, an applied money-making project for you and your university, Mr. Jeffrey Beall?

(By the way, why should a small publishing house from some place in India, which can attract neither papers nor editorial board members from Western universities, be on your black list, Mr. Jeffrey Beall? By that standard you should also blacklist all non-US and non-European universities. Of course there exist first-rate universities, like Harvard, MIT, Berkeley, and Cambridge. Should all other universities be on a black list? Is this your logic, "Professor" Beall? Furthermore, you condemn every new publishing house: it is natural for them to have no papers and no indexing as soon as they launch, yet they have to deal with you, who, like a vulture, immediately put them on your black list for precisely those reasons.)

I would greatly appreciate your response, Mr. Jeffrey Beall. And I would also appreciate feedback from anyone who agrees with me.

My aim is to create a network of true scientists and expose "Professor", "Academic Teacher" and, above all, "Researcher" Jeffrey Beall (this science jack-of-all-trades who doesn't know a first-degree algebraic equation, derivatives, integrals, or the elementary laws of Physics and Chemistry).
Thank you






Wednesday, 2 October 2013

Fake Conferences OMICS: The fake OMICS conferences do nothing but send SPAM, SPAM, and more SPAM to us. These academic criminals send the following type of SPAM.

The fake OMICS conferences do nothing but send SPAM, SPAM, and more SPAM to us. These academic criminals send the following type of SPAM. Isolate these academic sharks now and put the fake OMICS conferences on your black list.



Dear Author
We would like to invite you to contribute a Manuscript for publication in the upcoming issue. 

We have chosen selective scientists who have contributed excellent work, to help us release best quality articles for the upcoming issue. Thus I kindly request you to contribute any kind of article (Research, Review, Short Commentaries, Case reports, Mini Review, etc.)
Note: We will extend the date of submission as per your convenience.
Articles of Thermodynamics & Catalysis will be indexed in EBSCO, CAS, Hinari, Google Scholar, Pro Quest, Sherpa-Romeo, Open J-gate, GALE, Journal Seek and PubMed (Only NIH Portfolio).  

You can find more details at 
http://www.omicsonline.org/InstructionsForAuthorsJTC.php  We will plan to publish your paper, along with others that we receive on the issue of the Journal.

We request you to respond to this mail within 48 hrs. 

With thanks & regards,

Rakesh M
Editorial Assistant
Thermodynamics and Catalysis
731 Gull Ave, Foster City
CA 94404, USA
Phone: +1-650-268-9744
Fax: +1-650-618-1414

Saturday, 12 January 2013

Hapless victims of egregious spam from IEEE

Everyone in our department today received 5-6 copies of this IEEE mailing in their mailboxes. The advertised journal may be prestigious, but sending us spam looks like a lack of academic credibility. Refrain from sending papers to these spammers. See also http://netdriver.blogspot.com/2012/07/fake-ieee-conferences-at.html


IEEE JOURNAL OF EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS (JETCAS)

-       MASSOUD PEDRAM, Editor-in-Chief (EiC)
-       MANUEL DELGADO-RESTITUTO, Deputy EiC


Volume 2, Issue 3
http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6374660

-----------------------------------------------------------------------------------------------------------------

SPECIAL ISSUE ON CIRCUITS, SYSTEMS, AND ALGORITHMS FOR COMPRESSIVE SENSING
Compressed sensing (CS) is a new paradigm for the acquisition/sampling of signals that violates the intuition behind the Shannon sampling theorem. In fact, CS theory states that, under surprisingly broad conditions, it is possible to reconstruct certain signals or images using far fewer samples or measurements than traditional methods use. To enable this, compressive sensing rests on two principles: 1) sparsity, which pertains to the signals of interest, and 2) incoherence, which pertains to the method of measurement/acquisition/sampling.
This issue on Circuits, Systems, and Algorithms for Compressive Sensing presents both results on the exploitation of CS techniques in signal and image processing and, for the first time, a comprehensive collection of contributions dealing with the design and implementation of circuits and systems exploiting compressive sensing techniques, presented in a systematic way.
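
(Editorial aside: the two principles above are easy to demonstrate concretely. The sketch below, our own illustration in Python/NumPy rather than code from any paper in this issue, with all names our own, draws a random incoherent sensing matrix, takes far fewer measurements than the signal length, and recovers a sparse signal by orthogonal matching pursuit:

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, sparsity
x = np.zeros(n)                         # build a k-sparse test signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random matrix, incoherent with the identity basis
y = A @ x                                     # m << n linear measurements

# Orthogonal matching pursuit: greedily add the column most correlated
# with the residual, then least-squares re-fit on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("worst-case reconstruction error:", np.max(np.abs(x_hat - x)))

With 64 random measurements of a length-256, 5-sparse signal, recovery is typically exact up to numerical precision, which is exactly the counterintuitive behavior the paragraph above describes.)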


Allstot, D.; Rovatti, R.; Setti, G., Guest Editorial
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6363482

------------------------------------------------------------------------------------------------------------------

Coluccia, G.; Kuiteing, S. K.; Abrardo, A.; Barni, M.; Magli, E., Progressive Compressed Sensing and Reconstruction of Multidimensional Signals Using Hybrid Transform/Prediction Sparsity Model
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6331018
Compressed sensing (CS) is an innovative technique that allows signals to be represented through a small number of their linear projections. Hence, CS can be thought of as a natural candidate for the acquisition of multidimensional signals, as the amount of data acquired and processed by conventional sensors could create problems in terms of computational complexity. In this paper, we propose a framework for the acquisition and reconstruction of multidimensional correlated signals. The approach is general and can be applied to D-dimensional signals, even though the algorithms we propose to practically implement such architectures apply to 2-D and 3-D signals. The proposed architectures employ iterative local signal reconstruction based on a hybrid transform/prediction correlation model, coupled with a proper initialization strategy.

-----------------------------------------------------------------------------------------------------------------

Shishkin, S. L., Fast and Robust Compressive Sensing Method Using Mixed Hadamard Sensing Matrix
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6311439
The paper presents a novel class of sensing matrices that provides a great speed-up of virtually any compressed sensing (CS) algorithm. It combines separable structure with maximal incoherence with any fixed basis. The former enables fast matrix-vector computation, which is the most computationally expensive part of most CS algorithms; the latter guarantees a good restricted isometry property bound and high-quality CS recovery. Even greater speed-up is achieved by using Hadamard or Fourier matrices in the construction. The construction of the sensing matrix is incorporated into a Split Bregman method of total variation minimization. The resulting algorithm is not only much faster than any published CS method; it also demonstrates high-quality CS recovery of images with the number of measurements as low as 5% of the number of pixels, in the presence of high measurement noise (up to 20% measurement standard deviation).
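
(Editorial aside: the speed-up that such structured constructions buy comes from fast transforms, which apply an n x n Hadamard matrix in O(n log n) operations instead of O(n^2). A generic sketch of the fast Walsh-Hadamard transform in Python/NumPy, our own illustration and not the construction from the paper above:

import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform, O(n log n); len(x) must be a power of two.
    x = np.asarray(x, dtype=float).copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h]
            # butterfly step: sums in the first half, differences in the second
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

# Sanity check against the explicit (Sylvester-ordered) Hadamard matrix for n = 8.
H = np.array([[1.0]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
v = np.arange(8.0)
assert np.allclose(fwht(v), H @ v)

Applying the sensing matrix through such a transform is what keeps the matrix-vector products inside CS solvers cheap.)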

-----------------------------------------------------------------------------------------------------------------

Majumdar, A.; Ward, R. K.; Aboulnasr, T., Algorithms to Approximately Solve NP Hard Row-Sparse MMV Recovery Problem: Application to Compressive Color Imaging
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6365267
This paper addresses the row-sparse multiple measurement vector (MMV) recovery problem. This requires solving a nondeterministic polynomial (NP) hard optimization problem. Instead of approximating the NP-hard problem by its convex/nonconvex surrogates as is done in other studies, we propose techniques to directly solve the NP-hard problem approximately with tractable algorithms. The algorithms derived here yield better recovery rates than the state-of-the-art convex (spectral projected gradient) algorithm we compared against. We show that compressive color image reconstruction can be formulated as an MMV recovery problem with sparse rows and can therefore be solved by our proposed method. The reconstructed images are more accurate (an improvement of about 2 dB in peak signal-to-noise ratio) than those of the previous technique compared against.

-----------------------------------------------------------------------------------------------------------------

Bilen, C.; Wang, Y.; Selesnick, I. W., High-Speed Compressed Sensing Reconstruction in Dynamic Parallel MRI Using Augmented Lagrangian and Parallel Processing
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6313929
Magnetic resonance imaging (MRI) is one of the fields in which compressed sensing theory is well utilized to reduce scan time significantly, leading to faster imaging or higher-resolution images. It has been shown that a small fraction of the overall measurements is sufficient to reconstruct images with the combination of compressed sensing and parallel imaging. Various reconstruction algorithms have been proposed for compressed sensing, among which augmented Lagrangian based methods have been shown to often perform better than others for many different applications. In this paper, we propose new augmented Lagrangian based solutions to the compressed sensing reconstruction problem with analysis and synthesis prior formulations. We also propose a computational method which makes use of properties of the sampling pattern and the singular value decomposition of the system transfer function to significantly improve the speed of the reconstruction for the proposed algorithms in Cartesian sampled MRI. The proposed algorithms are shown to outperform earlier methods, especially for the case of dynamic MRI, for which the transfer function tends to be a very large matrix and significantly ill-conditioned. It is also demonstrated that the proposed algorithm can be accelerated much further than other methods in the case of a parallel implementation with graphics processing units.

-----------------------------------------------------------------------------------------------------------------

Zhang, J.; Zhao, D.; Zhao, C.; Xiong, R.; Ma, S.; Gao, W., Image Compressive Sensing Recovery via Collaborative Sparsity
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6341094
Compressive sensing (CS) has drawn quite an amount of attention as a joint sampling and compression approach. Its theory shows that when the signal is sparse enough in some domain, it can be decoded from many fewer measurements than suggested by the Nyquist sampling theory. So one of the most challenging research problems in CS is to seek a domain where a signal can exhibit a high degree of sparsity and hence be recovered faithfully. Most of the conventional CS recovery approaches, however, exploited a set of fixed bases (e.g., DCT, wavelet, and gradient domain) for the entirety of a signal, which are irrespective of the nonstationarity of natural signals and cannot achieve a high enough degree of sparsity, thus resulting in poor rate-distortion performance. In this paper, we propose a new framework for image compressive sensing recovery via collaborative sparsity, which enforces local 2-D sparsity and nonlocal 3-D sparsity simultaneously in an adaptive hybrid space-transform domain, thus substantially utilizing the intrinsic sparsity of natural images and greatly confining the CS solution space. In addition, an efficient augmented Lagrangian-based technique is developed to solve the above optimization problem. Experimental results on a wide range of natural images are presented to demonstrate the efficacy of the new CS recovery strategy.

-----------------------------------------------------------------------------------------------------------------

Hyder, Md. D.; Mahata, K., Maximum a Posteriori Based Approach for Target Detection in MTI Radar
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6317202
We propose a sparse recovery approach to detect moving targets in clutter. In the presence of clutter, the target space is not sparse. We propose a simple way to estimate the clutter region. We then enforce sparsity by modeling the clutter as a single extended cluster of nonzero components. This is done by solving a sparse signal recovery problem with partially known support within a maximum a posteriori estimation framework. The resulting algorithm is applied in angle-Doppler imaging for moving target indication in an airborne radar. Our approach has a number of advantages, including improved robustness to noise and increased resolution with limited data.

-----------------------------------------------------------------------------------------------------------------

Barbotin, Y.; Vetterli, M., Fast and Robust Parametric Estimation of Jointly Sparse Channels
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6310074
We consider the joint estimation of multipath channels obtained with a set of receiving antennas and uniformly probed in the frequency domain. This scenario fits most of the modern outdoor communication protocols for mobile access (ETSI Std. 125 913) or digital broadcasting (ETSI Std. 300 744), among others. Such channels verify a sparse common support (SCS) property, which was used in the work of Barbotin (2012) to propose a finite rate of innovation (FRI)-based sampling and estimation algorithm. In this paper, we improve the robustness and computational complexity aspects of this algorithm. The method is based on projection onto Krylov subspaces to improve complexity, and on a new criterion called the partial effective rank (PER) to estimate the level of sparsity and gain robustness. If P antennas measure a K-multipath channel with N uniformly sampled measurements per channel, the algorithm has an O(KPN log N) complexity and an O(KPN) memory footprint, instead of O(PN^3) and O(PN^2) for the direct implementation, making it suitable for K << N. The sparsity is estimated online based on the PER, and the algorithm therefore has a sense of introspection, being able to relinquish sparsity if it is lacking. The estimation performance is tested on field measurements with synthetic additive white Gaussian noise, and the proposed algorithm outperforms nonsparse reconstruction in the medium-to-low signal-to-noise ratio range (<= 0 dB), increasing the rate of successful symbol decoding by 1/10 on average, and by 1/3 in the best case. The experiments also show that the algorithm does not perform worse than a nonsparse estimation algorithm in nonsparse operating conditions, since it may fall back to it if the PER criterion does not detect a sufficient level of sparsity. The algorithm is also tested against a method assuming a "discrete" sparsity model, as in compressed sensing. The conducted test indicates a trade-off between speed and accuracy.

-----------------------------------------------------------------------------------------------------------------

Vlachos, E.; Lalos, A. S.; Berberidis, K., Stochastic Gradient Pursuit for Adaptive Equalization of Sparse Multipath Channels
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6303838
In this paper, a new heuristic algorithm for the sparse adaptive equalization problem, termed stochastic gradient pursuit, is proposed. A decision-feedback equalization structure is used in order to effectively mitigate the effect of long multipath channels. Diverging from the commonly used approach of sparse channel identification, we exploit the sparsity of the inverse problem from the compressive sensing perspective. Also, an extension to the case where the sparsity order parameter is unknown is developed. Simulation results verify that the proposed schemes exhibit faster convergence and improved tracking capabilities compared to conventional and other sparsity-aware equalization schemes, while offering at the same time reduced computational complexity.

-----------------------------------------------------------------------------------------------------------------

Ravanmehr, V.; Danjean, L.; Vasic, B.; Declercq, D., Interval-Passing Algorithm for Non-Negative Measurement Matrices: Performance and Reconstruction Analysis
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6317120
We consider the Interval-Passing Algorithm (IPA), an iterative reconstruction algorithm for the reconstruction of non-negative sparse real-valued signals from noise-free measurements. We first generalize the IPA by relaxing the original constraint that the measurement matrix must be binary. The new algorithm operates on any non-negative sparse measurement matrix. We give a performance comparison of the generalized IPA with the reconstruction algorithms based on 1) linear programming and 2) verification decoding. Then we identify signals not recoverable by the IPA on a given measurement matrix, and show that these signals are related to stopping sets responsible for failures of iterative decoding algorithms on the binary erasure channel (BEC). Contrary to the results of iterative decoding on the BEC, the smallest stopping set of a measurement matrix is not the smallest configuration on which the IPA fails. We analyze the recovery of sparse signals on subsets of stopping sets via the IPA and provide sufficient conditions for the exact recovery of sparse signals. Reconstruction performance of the IPA using the IEEE 802.16e LDPC codes as measurement matrices is given to show the effect of stopping sets on the performance of the IPA.

-----------------------------------------------------------------------------------------------------------------

Bai, L.; Roy, S., Compressive Spectrum Sensing Using a Bandpass Sampling Architecture
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6303836
Fast and reliable detection of available channels (i.e., those temporarily unoccupied by primary users) is a fundamental problem in the context of emerging cognitive radio networks, without an adequate solution. The (mean) time to detect idle channels is governed by the front-end bandwidth to be searched for a given resolution bandwidth. Homodyne receiver architectures with a wideband radio-frequency front-end followed by suitable channelization and digital signal processing algorithms are consistent with speedier detection, but also imply the need for very high speed analog-to-digital converters (ADCs) that are impractical and/or costly. On the other hand, traditional heterodyne receiver architectures consist of analog band-select filtering followed by down-conversion that require much lower rate ADCs, but at the expense of significant scanning operation steps that constitute a roadblock to lowering the scan duration. In summary, neither architecture provides a satisfactory solution to the goal of (near) real-time wideband spectrum sensing. In this work, we propose a new compressive spectrum sensing architecture based on the principle of under-sampling (or bandpass sampling) that provides a middle ground between the above choices, i.e., our approach requires modest ADC sampling rates and yet achieves fast spectrum scanning. Compared to other compressive spectrum sensing architectures, the proposed method does not require a high-speed Nyquist rate analog component. A performance model for the scanning duration is developed based on the mean time to detect all idle channels. Numerical results show that this scheme provides significantly faster idle channel detection than the conventional serial search scheme with a heterodyne architecture.

-----------------------------------------------------------------------------------------------------------------

Haboba, J.; Mangia, M.; Pareschi, F.; Rovatti, R.; Setti, G., A Pragmatic Look at Some Compressive Sensing Architectures With Saturation and Quantization
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6352939
The paper aims to highlight relative strengths and weaknesses of some of the recently proposed architectures for hardware implementation of analog-to-information converters based on Compressive Sensing. To do so, the most common architectures are analyzed when saturation of some building blocks is taken into account, and when measurements are subject to quantization to produce a digital stream. Furthermore, the signal reconstruction is performed by established and novel algorithms (one based on linear programming and the other based on iterative guessing of the support of the target signal), as well as their specialization to the particular architecture producing the measurements. Performance is assessed both as the probability of correct support reconstruction and as the final reconstruction error.

-----------------------------------------------------------------------------------------------------------------

Khan, O. U.; Chen, S.-Y.; Wentzloff, D. D.; Stark, W. E., Impact of Compressed Sensing With Quantization on UWB Receivers With Multipath Channel Estimation
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6354004
This paper explores the application of compressive sensing (CS) to ultra-wideband (UWB) communication. Channel estimation is an important aspect of any communication system, and especially of UWB systems, in order to appropriately collect the energy from the multipath channel. UWB generally requires a high sampling rate since the bandwidth is large. Channel estimation using CS is studied along with its impact on reducing the sampling rate of an ADC to reduce power. Practical issues regarding the effect of quantization on channel estimation are addressed, and a hardware implementation for CS based on the Walsh-Hadamard transform (WHT) allowing sub-Nyquist sampling is proposed. To separate the effect of channel estimation with CS, the performance of the sub-Nyquist ADC is studied in a noiseless and multipath-free channel, and design decisions are discussed. Comparison with the Nyquist ADC shows that using the sub-Nyquist ADC reduces power by a factor of about 6. For the proposed hardware, two receiver architectures based on matched filtering and filtering in the compressed domain (so-called "smashed filtering") are studied. It is found that with a perfect channel, smashed filtering performs better than matched filtering. Finally, the effect of channel estimation on the proposed hardware is studied along with two different recovery algorithms, namely basis pursuit and matching pursuit.

-----------------------------------------------------------------------------------------------------------------

Kim, Y.; Guo, W.; Gowreesunker, B. V.; Sun, N.; Tewfik, A. H., Multi-Channel Sparse Data Conversion With a Single Analog-to-Digital Converter
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6359799
We address the problem of performing simultaneous analog-to-digital (A/D) conversion on multi-channel signals using a single A/D converter (ADC). Assuming that each input has an unknown sparse representation in known dictionaries, we find that multi-channel information can be sampled with a single ADC. The proposed ADC architecture consists of a mixed-signal block and a digital signal processing (DSP) block. The channel inputs are sampled by switched-capacitor-based sample-and-hold circuits, and then mixed using sequences of plus or minus ones, leading to no bandwidth expansion. The resulting discrete-time signals are converted to digital sequences by a single ADC or quantizer. At the DSP block, each channel is separated from the digitized mixture through various separation algorithms that are widely used in compressive sensing. For this, we study several techniques for separating the mixture of the channel inputs into the same number of digital sequences corresponding to each channel. We show that with an ideal ADC, perfect reconstruction of the signals is possible if the input signals are sufficiently sparse. We also show simulation results with a 16-bit ADC model, and the reconstruction is possible up to the accuracy of the ADCs.

-----------------------------------------------------------------------------------------------------------------

Zhou, J.; Ramirez, M.; Palermo, S.; Hoyos, S., Digital-Assisted Asynchronous Compressive Sensing Front-End
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6353598
Compressive sensing (CS) is a promising technique that enables sub-Nyquist sampling while still guaranteeing reliable signal recovery. However, existing mixed-signal CS front-end implementation schemes often suffer from high power consumption and nonlinearity. This paper presents a digital-assisted asynchronous compressive sensing (DACS) front-end which offers lower power and higher reconstruction performance relative to conventional CS-based approaches. The front-end architecture leverages a continuous-time ternary encoding scheme which modulates amplitude variation into ternary timing information. Power is optimized by employing digital-assisted modules in the front-end circuit and a part-time operation strategy for high-power modules. An S-member Group-based Total Variation (S-GTV) algorithm is proposed for the sparse reconstruction of piecewise-constant signals. By including both the inter-group and intra-group total variation, the S-GTV scheme outperforms the conventional TV-based methods in terms of faster convergence rate and better sparse reconstruction performance. Analyses and simulations with a typical ECG recording system confirm that the proposed DACS front-end outperforms a conventional CS-based front-end using a random demodulator in terms of lower power consumption, higher recovery performance, and more system flexibility.

-----------------------------------------------------------------------------------------------------------------

Mamaghanian, H.; Khaled, N.; Atienza, D.; Vandergheynst, P., Design and Exploration of Low-Power Analog to Information Conversion Based on Compressed Sensing
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6332541
The long-standing analog-to-digital conversion paradigm based on Shannon/Nyquist sampling has been challenged lately, mostly in situations such as radar and communication signal processing where signal bandwidth is so large that sampling architecture constraints are simply not manageable. Compressed sensing (CS) is a new, emerging signal acquisition/compression paradigm that offers a striking alternative to traditional signal acquisition. Interestingly, by merging the sampling and compression steps, CS also removes a large part of the digital architecture and might thus considerably simplify analog-to-information (A2I) conversion devices. This so-called "analog CS", where compression occurs directly in the analog sensor readout electronics prior to analog-to-digital conversion, could thus be of great importance for applications where bandwidth is moderate but the processing is computationally complex and power resources are severely constrained. In our previous work (Mamaghanian, 2011), we quantified and validated the potential of digital CS systems for real-time and energy-efficient electrocardiogram compression on resource-constrained sensing platforms. In this paper, we review the state-of-the-art implementations of CS-based signal acquisition systems and perform a complete system-level analysis for each implementation to highlight their strengths and weaknesses regarding implementation complexity, performance and power consumption. Then, we introduce the spread spectrum random modulator pre-integrator (SRMPI), which is a new design and implementation of a CS-based A2I read-out system that uses spread spectrum techniques prior to random modulation in order to produce the low-rate set of digital samples. Finally, we experimentally built an SRMPI prototype to compare it with state-of-the-art CS-based signal acquisition systems, focusing on critical system design parameters and constraints, and show that this new proposed architecture offers a compelling alternative, in particular for low-power and computationally-constrained embedded systems.

-----------------------------------------------------------------------------------------------------------------

Yenduri, P. K.; Rocca, A. Z.; Rao, A. S.; Naraghi, S.; Flynn, M. P.; Gilbert, A. C., A Low-Power Compressive Sampling Time-Based Analog-to-Digital Converter
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6341093
This paper presents a low-power, time-based, compressive sampling architecture for analog-to-digital conversion. A random pulse-position-modulation (PPM) analog-to-digital conversion (ADC) architecture is proposed. A prototype 9-bit random PPM ADC incorporating a pseudo-random sampling scheme is implemented as proof of concept. This approach leverages the energy efficiency of time-based processing. The use of sampling techniques that exploit signal compressibility leads to further improvements in efficiency. The random PPM ADC employs compressive sampling techniques to efficiently sample at sub-Nyquist rates. The sub-sampled signal is recovered using a reconstruction algorithm, which is tailored for practical hardware implementation. We develop a theoretical analysis of the hardware architecture and the reconstruction algorithm. Measurements of a prototype random PPM ADC and simulations demonstrate this theory. The prototype successfully demonstrates a 90% reduction in sampling rate compared to the Nyquist rate for input signals that are 3% sparse in the frequency domain.

-----------------------------------------------------------------------------------------------------------------

Wakin, M.; Becker, S.; Nakamura, E.; Grant, M.; Sovero, E.; Ching, D.; Yoo, J.; Romberg, J.; Emami-Neyestanak, A.; Candes, E., A Nonuniform Sampler for Wideband Spectrally-Sparse Environments
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6316045
We present a wide-bandwidth, compressed sensing based nonuniform sampling (NUS) system with a custom sample-and-hold chip designed to take advantage of a low average sampling rate. By sampling signals nonuniformly, the average sample rate can be more than an order of magnitude lower than the Nyquist rate, provided that these signals have a relatively low information content as measured by the sparsity of their spectrum. The hardware design combines a wideband Indium-Phosphide heterojunction bipolar transistor sample-and-hold with a commercial off-the-shelf analog-to-digital converter to digitize an 800 MHz to 2 GHz band (having 100 MHz of noncontiguous spectral content) at an average sample rate of 236 Ms/s. Signal reconstruction is performed via a nonlinear compressed sensing algorithm, and the challenges of developing an efficient implementation are discussed. The NUS system is a general-purpose digital receiver. As an example of its real-signal capabilities, measured bit-error-rate data for a GSM channel are presented, and comparisons to a conventional wideband 4.4 Gs/s ADC are made.

-----------------------------------------------------------------------------------------------------------------

Shapero, S.; Charles, A. S.; Rozell, C. J.; Hasler, P., Low Power Sparse Approximation on Reconfigurable Analog Hardware
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6313932
Compressed sensing is an important application in signal and image processing which requires solving nonlinear optimization problems. A Hopfield-Network-like analog system is proposed as a solution, using the locally competitive algorithm (LCA) to solve an overcomplete l1 sparse approximation problem. A scalable system architecture using sub-threshold currents is described, including vector-matrix multipliers (VMMs) and a nonlinear thresholder. A 4 x 6 nonlinear system is implemented on the RASP 2.9v chip, a field-programmable analog array with directly programmable floating-gate elements, allowing highly accurate VMMs. The circuit successfully reproduced the outputs of a digital optimization program, converging to within 4.8% rms, with an objective value only 1.3% higher on average. The active circuit consumed 29 µA of current at 2.4 V, and converges on solutions in 240 µs. A smaller 2 x 3 system is also implemented. Extrapolating the scaling trends to an N = 1000 node system, the analog LCA compares favorably with state-of-the-art digital solutions, using a small fraction of the power to arrive at solutions ten times faster. Finally, we provide simulations of large-scale systems to show the behavior of the system scaled to nontrivial problem sizes.

-----------------------------------------------------------------------------------------------------------------

Chen, X.; Sobhy, E. A.; Yu, Z.; Hoyos, S.; Silva-Martinez, J.; Palermo, S.; Sadler, B. M., A Sub-Nyquist Rate Compressive Sensing Data Acquisition Front-End
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6338302
This paper presents a sub-Nyquist rate data acquisition front-end based on compressive sensing theory. The front-end randomizes a sparse input signal by mixing it with pseudo-random number sequences, followed by analog-to-digital converter sampling at sub-Nyquist rate. The signal is then reconstructed using an L1-based optimization algorithm that exploits the signal sparsity to reconstruct the signal with high fidelity. The reconstruction is based on a priori signal model information, such as a multi-tone frequency-sparse model which matches the input signal frequency support. Wideband multi-tone test signals with 4% sparsity in the 5~500 MHz band were used to experimentally verify the front-end performance. Single-tone and multi-tone tests show maximum signal to noise and distortion ratios of 40 dB and 30 dB, respectively, with an equivalent sampling rate of 1 GS/s. The analog front-end was fabricated in a 90 nm complementary metal-oxide-semiconductor process and consumes 55 mW. The front-end core occupies 0.93 mm2.

-----------------------------------------------------------------------------------------------------------------

Kong, X.; Matic, R.; Xu, Z.; Kukshya, V.; Petre, P.; Jensen, J., A Time-Encoding Machine Based High-Speed Analog-to-Digital Converter
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6363480
A new time-encoding machine (TEM) based analog-to-digital converter (ADC) architecture is presented in this paper. The main advantage of this architecture is that it relies on an asynchronous process and removes an important performance-limiting factor in conventional ADCs: clock jitter. Therefore, this architecture is suitable for very high speed ADCs. To expand the bandwidth coverage, compressive sensing techniques are employed to reconstruct sparse signals with very high frequency. The system can run under two different modes: the normal mode, where the signal is sampled at above the Nyquist rate, and the compressive sensing mode. Nonidealities in circuits and system parameter setting tradeoffs are analyzed to determine the best parameters for the system to reach optimal performance.

-----------------------------------------------------------------------------------------------------------------

Maleh, R.; Fudge, G. L.; Boyle, F. A.; Pace, P. E., Analog-to-Information and the Nyquist Folding Receiver
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6355638
Recovering even a small amount of information from a broadband radio frequency (RF) environment using conventional analog-to-digital converter (ADC) technology is computationally complex and presents significant challenges. For sparse or compressible RF environments, an alternate approach to conventional sampling is analog-to-information (A2I) to enable sub-Nyquist rate sampling based on compressive sensing (CS) principles. This paper presents the Nyquist Folding Receiver (NYFR), an efficient A2I architecture that folds the broadband RF input prior to digitization by a narrowband ADC. The folding is achieved by undersampling the RF spectrum with a stream of short pulses that have a phase-modulated sampling period. The undersampled signals then fold down into a low pass interpolation filter. The pulse sample time modulation induces a corresponding phase modulation on the received signals that is scaled by an integer modulation index that varies with the Nyquist zone (i.e., fold number), allowing the signals to be separated based on the measured modulation index. Unlike many schemes motivated by CS that randomize the RF prior to digitization, the NYFR substantially preserves signal structure. This enables information recovery with very low computational complexity algorithms in addition to traditional CS reconstruction techniques. The paper includes a comparison of seven other A2I architectures with the NYFR.

-----------------------------------------------------------------------------------------------------------------

Wang, M.; Wu, J.; Shi, S. F.; Luo, C.; Wu, F., Fast Decoding and Hardware Design for Binary-Input Compressive Sensing
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6331566
Binary-input compressive sensing (BiCS) has recently been applied to wireless communications as a modulated coding scheme for seamless rate adaptation. Different from conventional channel codes, which generate binary symbols with exclusive-OR (XOR) operations, BiCS generates multilevel symbols through a weighted-sum operation. Although BiCS can be decoded by message passing, it needs to compute the convolution of probability functions in each iteration. The high decoding complexity has prevented the technique from being applied in practice. In this paper, we propose a fast BiCS decoding algorithm and its corresponding partial-parallel hardware design. In this algorithm, we first build lookup tables to solve the computationally intensive problem of convolution. Through these tables, we successfully convert the convolution of probabilities into a polynomial of some exponential terms. This key step allows us to use the log-likelihood ratio as the message in message-passing decoding, and a fast algorithm is developed by approximate computing. We further design a partial-parallel hardware decoder. To avoid memory collision, we propose a multilevel cyclic-shift approach to generate the CS measurement matrix. We design horizontal unit processors with the proposed tables for iterative computing. Our analyses show that the proposed fast algorithm can reduce multiplications by nearly 90%. The decoding speed of our field-programmable gate array design reaches the range of communication rates in modern wireless networks.

-----------------------------------------------------------------------------------------------------------------

Orchard, G.; Zhang, J.; Suo, Y.; Dao, M.; Nguyen, D. T.; Chin, S.; Posch, C.; Tran, T. D.; Etienne-Cummings, R., Real Time Compressive Sensing Video Reconstruction in Hardware
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6328225
Compressive sensing has allowed for reconstruction of missing pixels in incomplete images with higher accuracy than was previously possible. Moreover, video data or sequences of images contain even more correlation, leading to a much sparser representation as demonstrated repeatedly in numerous digital video formats and international standards. Compressive sensing has inspired the design of a number of imagers which take advantage of the need to only subsample a scene, which reduces power consumption by requiring acquisition and transmission of fewer samples. In this paper, we show how missing pixels in a video sequence can be estimated using compressive sensing techniques. We present a real-time implementation of our algorithm and show its application to an asynchronous time-based image sensor (ATIS) from the Austrian Institute of Technology. The ATIS only provides pixel intensity data when and where a change in pixel intensity is detected; however, noise randomly causes intensity changes to be falsely detected, thereby providing random samples of static regions of the scene. Unlike other compressive sensing imagers, which typically have pseudo-random sampling designed in at extra effort, the ATIS used here provides random samples as a side effect of circuit noise. Here, we describe and analyze a field-programmable gate array implementation of a matching pursuit (MP) algorithm for compressive sensing reconstruction capable of reconstructing over 1.9 million 8 x 8 pixel regions per second with a sparsity of 11, using a basis dictionary containing 64 elements. In our application to ATIS we achieve a throughput of 28 frames per second at a resolution of 304 x 240 pixels with reconstruction accuracy comparable to that of state-of-the-art algorithms evaluated offline.

-----------------------------------------------------------------------------------------------------------------

Chen, J.; Cong, J.; Vese, L. A.; Villasenor, J.; Yan, M.; Zou, Y., A Hybrid Architecture for Compressive Sensing 3-D CT Reconstruction
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6338301
The radiation dose associated with computerized tomography (CT) is significant. Compressive sensing (CS) methods provide mathematical approaches to reduce the radiation exposure without sacrificing reconstructed image quality. However, the computational requirements of these algorithms are much higher than those of conventional image reconstruction approaches such as filtered back projection (FBP). This paper describes a new compressive sensing 3-D image reconstruction algorithm based on expectation maximization and total variation, termed EM+TV, and also introduces a promising hybrid architecture implementation for this algorithm involving the combination of a CPU, GPU, and FPGA. An FPGA is used to speed up the major computation kernel (EM), and a GPU is used to accelerate the TV operations. The performance results indicate that this approach provides lower energy consumption and better reconstruction quality, and illustrates an example of the advantages that can be realized through domain-specific computing.

-----------------------------------------------------------------------------------------------------------------

Yoo, J.; Turnes, C.; Nakamura, E. B.; Le, C. K.; Becker, S.; Sovero, E. A.; Wakin, M. B.; Grant, M. C.; Romberg, J.; Emami-Neyestanak, A.; Candes, E., A Compressed Sensing Parameter Extraction Platform for Radar Pulse Signal Acquisition
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6311440
In this paper we present a complete (hardware/software) sub-Nyquist rate (13x) wideband signal acquisition chain capable of acquiring radar pulse parameters in an instantaneous bandwidth spanning 100 MHz-2.5 GHz with the equivalent of 8 effective number of bits (ENOB) digitizing performance. The approach is based on the alternative sensing paradigm of compressed sensing (CS). The hardware platform features a fully-integrated CS receiver architecture named the random-modulation preintegrator (RMPI), fabricated in Northrop Grumman's 450 nm InP HBT bipolar technology. The software back-end consists of a novel CS parameter recovery algorithm which extracts information about the signal without performing full time-domain signal reconstruction. This approach significantly reduces the computational overhead involved in retrieving desired information, and demonstrates an avenue toward employing CS techniques in power-constrained real-time applications. The developed techniques are validated on CS samples physically measured by the fabricated RMPI, and measurement results are presented. The parameter estimation algorithms are described in detail, and a complete description of the physical hardware is given.

-----------------------------------------------------------------------------------------------------------------

Luo, C.; Borkar, M. A.; Redfern, A. J.; McClellan, J. H., Compressive Sensing for Sparse Touch Detection on Capacitive Touch Screens
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6303837
Capacitive touch screens are ubiquitous in today's electronic devices. Improved touch screen responsiveness and resolution can be achieved at the expense of the touch screen controller analog hardware complexity and power consumption. This paper proposes an alternative compressive sensing based approach to exploit the sparsity of simultaneous touches with respect to the number of sensor nodes to achieve similar levels of responsiveness. It is possible to reduce the analog data acquisition complexity at the cost of extra digital computations with less total power consumption. Using compressive sensing, in order to resolve the positions of the sparse touches, the number of measurements required is related to the number of touches rather than the number of nodes. Detailed measurement circuits and methodologies are presented along with the corresponding reconstruction algorithm.



JETCAS is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, modeling, design, automation, and implementation of electronic circuits and systems, spanning theoretical foundations, applications, and architectures for signal and information processing.

________________________________________________________________________________________________________

     To unsubscribe from the JETCAS Newsletter, please send an email to cass_sysadmin@polito.it
_________________________________________________________________________________________________