Pham, Christopher

CHP

Email

Preferred: christopher.h.pham01@sjsu.edu

Alternate: pham.chris@outlook.com

Office Hours

7:00pm-7:30pm, Tues-Thur, Clark 202

 

Electrical Engineering Department

Licenses and Certificates

HONORS & AWARDS

2005 Asian American Engineer of the Year
2007 Awarded the Cisco company title “White Hat Hacker of the year”
2007 Education Medals by Department of Education (AsiaPac region)
More than 40 additional awards from private and government agencies worldwide
Featured by VOA, VOV, The Wall Street Journal, and other media and news outlets
Patents: US 7,765,174; US 7,293,142; US 7,721,265; US 7,930,491; US 6,999,952; US 20060107153 A1

Bio

Publication samples:

Network Vulnerability From Memory Abuse and Experimented Software Defect Detection

Jun Xu, Christopher Hoang Pham
13th IEEE International Symposium on Software Reliability
Engineering (ISSRE 2002); Annapolis, Maryland; Nov. 12-15, 2002;
Chillarege Press; Copyright 2002
While the majority of software developers are concerned with features,
performance, CPU usage, and similar criteria, many neglect memory
management, even though memory is one of the most fundamental resources
for software operation. The consequences of this negligence are more
severe than they sound: the memory resource can be exhausted by malicious
applications, leading to system malfunction, and, most serious of all,
there is the risk of security attack.
To address some of the common run-time problems observed to result from
poor memory management, the authors developed tools to detect the problems
early in the development cycle and isolate them at the source code level.
A practice was also trialed, as a solution, to alleviate poor-memory-management
software defects in the important phases of the SEI software development model.
This paper shares with the reader the non-proprietary observed data,
methods, and technology that were developed and leveraged to address
some severe memory abuse issues in both off-line and run-time domains.
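
A minimal illustration (in C, not taken from the paper) of two memory-abuse
defects of the kind such source-level detection targets; the function names
and sizes are made up for the example:

    /* Illustrative sketch only: two common memory-abuse defects. */
    #include <stdlib.h>
    #include <string.h>

    void use_after_free_defect(void)
    {
        char *buf = malloc(64);
        if (buf == NULL)
            return;
        free(buf);
        strcpy(buf, "stale write");    /* use after free: undefined behavior */
    }

    void unchecked_allocation_defect(size_t untrusted_len)
    {
        /* allocation size comes from untrusted input and is never bounded,
         * so a malicious caller can exhaust the memory resource */
        char *buf = malloc(untrusted_len);
        memset(buf, 0, untrusted_len); /* NULL result not checked: crash risk */
        free(buf);
    }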

Less Intrusive Memory Leak Detection inside Kernel

Jun Xu, Xiangrong Wang, Christopher Pham
Fast Abstract, ISSRE 2003; 14th IEEE International Symposium on Software
Reliability Engineering (ISSRE 2003); Denver, Colorado; Nov. 17-20, 2003;
Chillarege Press; Copyright 2003 (2 pages). 
A memory leak is a major resource issue that can lead to many system
malfunctions and negative performance impacts. A memory leak occurs when
memory is not freed after use, or when the pointer to a memory allocation
is lost, rendering the memory no longer usable. Leaks can appear in many
forms, contiguous or fragmented, in flat memory architectures or those
with virtual address spaces. Reckless use of dynamic memory allocation can
lead to memory management problems that cause performance degradation,
unpredictable execution, or crashes.
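
A minimal sketch (in C, not from the paper) of the two leak forms described
above, with invented names and sizes:

    /* Illustrative sketch only: the two leak patterns named in the abstract. */
    #include <stdlib.h>

    void leak_by_not_freeing(void)
    {
        int *data = malloc(100 * sizeof(int));
        if (data == NULL)
            return;
        data[0] = 42;
        /* returns without free(data): the block stays allocated forever */
    }

    void leak_by_losing_the_pointer(void)
    {
        char *p = malloc(128);
        p = malloc(256);    /* the first 128-byte block is now unreachable */
        free(p);            /* only the second block is released */
    }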

 

Memory leak detection system and method using contingency analysis

United States Patent 7293142, 
http://www.freepatentsonline.com/7293142.html 
Xu, Jun (Cupertino, CA, US); Wang, Xiangrong (Milpitas, CA, US);
Pham, Christopher H. (Milpitas, CA, US); Goli, Srinivas (San Jose, CA, US)
The present invention relates to testing of hardware and software, 
and particularly to the detection and identification of memory leaks
in software.
 

An effective method to detect software memory leakage leveraged from neuroscience principles governing human memory behavior

Xiangrong Wang ; Xu, J. ; Pham, C.H.
Published in: Software Reliability Engineering, 2004. ISSRE 2004. 15th International Symposium; 
Page(s):329 - 339, Print ISBN:0-7695-2215-7
Software memory leakage accounts for many dynamic system problems ranging 
from minor performance deterioration to major system crash due to low
memory, security exploitation or other side effects. General purpose
commercial static and dynamic memory leak analysis tools are available
for common operating systems. However, these tools normally produce
a high noise ratio of warning messages that require many human hours
to review and eliminate false-positive alarms. In-house tools for
proprietary platforms with special memory architectures also face
the same limitation. In a parallel field, human memory has been studied
by neuroscientists and is well documented along with the mathematical
expressions governing its behavior. Some studies from neuroscience inspired us toward
a new approach to resolve the software memory leak issues that were
occurring in our proprietary operating system. The results of our study
and experiment not only allowed us to create a method to accurately
detect memory leaks as a starting point, but also laid out a roadmap
for future work in this area by applying the neuroscience findings to
computer software to detect and control system resources. We hope
our findings and experience will help others to decrease the effort of
fighting against system memory leaks, whether starting from scratch, or
as a reference to improve the existing tools to reduce the reporting
noise ratio. In this paper, we walk through our mapping of Cue,
Recognition, and Recall as used in Kahana's neuroscience method [2000] to
the analogous memory elements of our target operating system, how we
applied Yule's Q equation to accurately pinpoint memory leaks in our
source code, and how we continuously fine-tune the noise threshold.
Our immediate roadmap includes a mathematical model to predict system
memory resource behavior and how we will apply it to our memory leak
detection tool to help prolong system availability.
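
Yule's Q is a standard association coefficient over a 2x2 contingency table,
Q = (ad - bc)/(ad + bc). The sketch below (in C) is not the paper's tool; the
mapping of the four cells to allocation/free evidence at a call site is an
assumption made purely to show how a strongly negative Q can single out a
leak candidate:

    /* Illustrative sketch only; the cell semantics are an assumption.
     * a: allocation events paired with a matching free
     * b: allocation events with no matching free
     * c: free events with no matching allocation
     * d: observation intervals with neither event                      */
    #include <stdio.h>

    static double yules_q(double a, double b, double c, double d)
    {
        double num = a * d - b * c;
        double den = a * d + b * c;
        return (den == 0.0) ? 0.0 : num / den;
    }

    int main(void)
    {
        /* a site whose allocations are rarely paired with frees drives Q
         * toward -1, flagging it as a leak candidate */
        printf("suspicious site Q = %.2f\n", yules_q(1.0, 60.0, 3.0, 30.0));
        printf("healthy site    Q = %.2f\n", yules_q(50.0, 1.0, 2.0, 30.0));
        return 0;
    }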

Linear associative memory-based hardware architecture for fault tolerant ASIC/FPGA work-around

United States Patent 6999952, 
http://www.freepatentsonline.com/6999952.html
A programmable logic unit (e.g., an ASIC or FPGA) having a feedforward 
linear associative memory (LAM) neural network checking circuit which
classifies input vectors to a faulty hardware block as either good or
not good and, when a new input vector is classified as not good, blocks
a corresponding output vector of the faulty hardware block, enables a
software work-around for the new input vector, and accepts the software
work-around input as the output vector of the programmable logic circuit.
The feedforward LAM neural network checking circuit has a weight matrix
whose elements are based on a set of known bad input vectors for said
faulty hardware block. The feedforward LAM neural network checking circuit
may update the weight matrix online using one or more additional bad input
vectors. A discrete Hopfield algorithm is used to calculate the weight
matrix W. The feedforward LAM neural network checking circuit calculates
an output vector a(m) by multiplying the weight matrix W by the new input
vector b(m), that is, a(m) = Wb(m), adjusts elements of the output vector
a(m) by respective thresholds, and processes the elements using a
plurality of non-linear units to provide an output of 1 when a given
adjusted element is positive, and provide an output of 0 when a given
adjusted element is not positive. If a vector constructed of the outputs
of these non-linear units matches with an entry in a content-addressable
memory (CAM) storing the set of known bad vectors (a CAM hit), then the
new input vector is classified as not good.
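
The classification step described above can be sketched in software as below;
the vector width, weight values, thresholds, and stored bad vectors are
assumed example data, not the patented hardware circuit:

    /* Illustrative software sketch of the LAM classification step;
     * all sizes and data are example assumptions. */
    #include <string.h>

    #define N 4          /* width of an input/output vector (assumed) */
    #define CAM_ROWS 2   /* number of stored known-bad vectors (assumed) */

    /* a(m) = W * b(m), then adjust by thresholds and binarize:
     * output 1 when the adjusted element is positive, else 0 */
    static void lam_classify(const int W[N][N], const int b[N],
                             const int threshold[N], int out[N])
    {
        for (int i = 0; i < N; i++) {
            int acc = 0;
            for (int j = 0; j < N; j++)
                acc += W[i][j] * b[j];
            acc -= threshold[i];
            out[i] = (acc > 0) ? 1 : 0;   /* non-linear unit */
        }
    }

    /* CAM hit: the binarized vector matches a stored known-bad vector,
     * so the new input is classified as "not good" */
    static int cam_hit(const int cam[CAM_ROWS][N], const int v[N])
    {
        for (int r = 0; r < CAM_ROWS; r++)
            if (memcmp(cam[r], v, sizeof(int) * N) == 0)
                return 1;
        return 0;
    }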
 

 

Performing high efficiency source code static analysis with intelligent extensions

Published in: Software Engineering Conference, 2004. 11th Asia-Pacific
Page(s):346 - 355, Print ISBN:0-7695-2245-9
This paper presents an industry practice for highly efficient source code 
analysis to promote software quality. As a continuation of a previously
reported source code analysis system, we researched and developed a few
engineering-oriented intelligent extensions to implement more cost-effective
extended code static analysis and engineering processes. These include an
integrated empirical scan and filtering tool for highly accurate noise
reduction, and a new code checking test tool to detect function call
mismatch problems, which may lead to many severe software defects. We also
extended the system with an automated defect filing and verification
procedure. The results show that, for a huge code base of millions of lines,
our intelligent extensions not only contribute to the completeness and
effectiveness of static analysis, but also deliver significant gains in
engineering productivity.
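
As an aside for the reader, the fragment below (two C source files shown
together, with invented names, not the paper's tool) illustrates the kind of
function call mismatch defect such a checker is meant to catch:

    /* Illustrative sketch only: a cross-file call mismatch.
     * The two fragments live in separate translation units. */

    /* --- crc.c: the real definition --------------------------------- */
    int compute_crc(const char *buf, unsigned len)
    {
        int crc = 0;
        for (unsigned i = 0; i < len; i++)
            crc = (crc << 1) ^ buf[i];
        return crc;
    }

    /* --- caller.c: a separate file with a stale declaration --------- */
    int compute_crc(const char *buf);     /* mismatch: length argument missing */

    int check_packet(const char *pkt)
    {
        return compute_crc(pkt);          /* compiles and links, but the callee
                                             reads a length that was never
                                             passed; a static checker can flag
                                             this by comparing each call site
                                             against the real definition      */
    }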
 

Recent Advances in Data Mining for Categorizing Text Records

W. Chaovalitwongse, Hoang Pham, Seheon Hwang, Z. Liang, C.H. Pham
Book Title: Recent Advances in Reliability and Quality in Design
Book Part V, Pages 423-440, Print ISBN: 978-1-84800-112-1
In a world with highly competitive markets, there is a great need in almost 
all business organizations to develop a highly effective coordination and
decision support tool that can be used to become, in daily operations, a
predictive enterprise that directs, optimizes, and automates specific decision-making
processes. The improved decision-making support can help people to examine
data on the past circumstances and present events, as well as project
future actions, which will continually improve the quality of products
or services. Such improvement has been driven by recent advances in
digital data collection and storage technology. The new technology in data
collection has resulted in the growth of massive databases, also known as
data avalanches. These rapidly growing databases occur in various
applications including the service industry, global supply chain organizations,
air traffic control, nuclear reactors, aircraft fly-by-wire, real time
sensor networks, industrial process control, hospital healthcare, and
security systems. The massive data, especially text records, on one hand,
may contain a great wealth of knowledge and information, but on the other
hand, contain other information that may not be reliable due to many
sources of uncertainty in our changing environments. However, manually
classifying thousands of text records according to their contents can be
demanding and overwhelming. Data mining has gained a lot of attention from
researchers and practitioners over the past decade as an emerging research
area in finding meaningful patterns to make sense out of massive data sets.

 

An Effective Low Cost Whitebox Approach to Construct System Level Test Vectors to Detect Buffer Overflow Defects

Fast Abstract ISSRE 2003, 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.147.2339&rep=rep1&type=pdf 
Buffer overflow continues to lead the list, accounting for 60% of the recent
2002 CERT advisories. In line with the effort to solve this problem, the paper
shares a low-cost method that has been used effectively to construct test
vectors to detect buffer overflow software defects online.
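
As an illustration only (in C, not the paper's method), the fragment below
shows a classic overflow defect and the kind of boundary-length test vector a
whitebox approach would derive from the declared buffer size; all names and
sizes are invented:

    /* Illustrative sketch only: overflow defect plus a boundary test vector. */
    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 8

    /* defect: no bound check before copying externally supplied input */
    static void set_name(char dst[NAME_LEN], const char *src)
    {
        strcpy(dst, src);              /* overflows dst when strlen(src) >= NAME_LEN */
    }

    int main(void)
    {
        char name[NAME_LEN];

        /* whitebox-derived test vector: one byte past the declared bound,
         * chosen from the source code rather than by blind fuzzing */
        const char *vector = "AAAAAAAA";   /* 8 chars + NUL = 9 bytes */

        set_name(name, vector);
        printf("name = %s\n", name);       /* stack already corrupted here */
        return 0;
    }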

 

Leverage Sport Management and Coaching to Win High Tech Global Game

Pham, C.H. ; Palla, M. ; Houshmand, K. ; Goli, S. ; Esmaili, R.
Published in: Engineering Management Conference, 2006 IEEE International
Page(s):94 - 100, Print ISBN:1-4244-0285-9
In the IEMC 2004 proceedings, the paper "Using Sport Analogy in High-Tech
Management to Improve Productivity by Improving Personal and Team
Performance" [1] received an amazing amount of feedback. Further communication
with readers inspired the authors to continue with this contribution and
provide more insight into our coaching and development process to build,
support, and continually renew the team. The paper describes the management
strategy to prepare for, win, and sustain the global high-tech games with a
team composed of high performers with complementary skill sets and experience
from all regions of the so-called "flat world". Again, sport analogy is used
in the same style as the first paper to get the points across.
 

Using sport analogy in high-tech management to improve productivity by improving personal and team performance

K. Houshmand ; ARF, Cisco Syst., Inc., San Jose, CA, USA ; S. Goli ;
R. Esmaili ; C. H. Pham
Published in: 
Engineering Management Conference, 2004. Proceedings. 2004 IEEE International  
(Volume:1 ), Page(s):11 - 15 Vol.1, Print ISBN:0-7803-8519-5
Improving personal and team performance is the main focus of sports management.
Many similarities exist in the high tech industry with high demand for
creativity and higher productivity, which translates into higher personal
and team performance. We would like to share our experience from our
management ABC-GOAL-FIRST strategy and S-I-R business operation model that
we used from 2001 to 2004. The creative strategy which focuses on personal
and team performance improvement based on the same analogy of sports
management has helped expand our team's charter from a regression facility
at the tail end of the software life cycle to a bigger company-wide scope.
The expanded charter allows the team to be involved in all phases of the
software life cycle collaboratively and cross-functionally to maximize our
contribution while leveraging other organizational expertise. We also would
like to share the analysis, the metrics, and the positive human effect of
the proven management strategy that helped to increase our team's
productivity sixfold during the past four years.
 

Key Foundations to Successfully Build and Manage Productive Global Virtual Teams

Palla, M. ; Pham, C. ; Houshmand, K. ; Jose, J.P. ; Vedamoorthy, M.
Published in:Engineering Management Conference, 2006 IEEE International
Page(s):101 - 105, Print ISBN:1-4244-0285-9
In this paper we share our experience of how corporations working together
can create "One Team" consisting of many virtual teams, and overcome the
challenges posed by the global economy.


 

Turning and Managing Innovation into Automation for Higher Competitive Productivity

Palla, M. ; Cisco Syst., Inc., San Jose, CA ; Hu, B. ; Houshmand, K. ; 
Pham, C.
Published in: 
Management of Innovation and Technology, 2006 IEEE International Conference on  
(Volume:2 ), Page(s):1048 - 1052, Print ISBN:1-4244-0147-X
Innovation boosts productivity and drives prosperity. In the new 
millennium's competitive high tech environment, innovation allows
companies to stay competitive in their own sectors while enabling them
to advance further into additional areas. Effective leaders rely on and
leverage both people skills and automated machine power to maximize
team productivity. While innovators prove out the tools and best practices
to boost productivity, adaptors integrate these innovations into the daily
work environment and realize the maximum long-term gains for the company.
While a flexible environment fosters innovation, a stable structure nurtures
adaptation. There are a number of challenges that leaders have to face
in order to first facilitate an environment for managed innovation, then
finally automate and integrate it into the process. This paper discusses
the challenges faced by the Advanced Regression/Research Facility (ARF)
at Cisco Systems in leading and creating an environment that embraces
changes seamlessly. In order to complement the traditional regression
testing, innovation concepts were proven, automated and integrated into
ARF's daily business to improve its defect finding rate. The innovations
resulted in changes and introduced management challenges, but paid handsome
dividends in the end, improving productivity by 900% within six years in
an information-sharing and individual-recognition environment.


Extend the meaning of "R" to "R4" in ART (automated software regression technology) to improve quality and reduce R&D and production costs

Houshmand, K. ; ARF, Cisco Syst. Inc., San Jose, CA, USA ; Goli, S. ; 
Esmaili, R. ; Pham, C.H.
Published in: 
Engineering Management Conference, 2004. Proceedings. 2004 IEEE International  
(Volume:1 ), Page(s):70 - 74 Vol.1, Print ISBN:0-7803-8519-5
Regression testing has been conventionally employed to check the 
effectiveness of a solution, track existing issues and any new issues
created as a result of fixing the old issues. Positioned at the tail
end of the software cycle, regression testing technology can hardly
influence or contribute to earlier phases such as architecture, design,
implementation, or device testing. Extending the "R" in ART to R4
(regression, research, retain & grow expertise, and early exposure) has
been proven effective. R4 not only provides ART with more powerful tools
to detect issues as early as the architecture phase, but also arms R&D
software with more proactive practices to keep costly catastrophic problems
from propagating to customer sites. This paper attempts to share some best
practices and contributions from Cisco-ARF (a Cisco automated
regression/research facility) whose charter is to ensure the quality
of product lines running on tens of millions of lines of code. These
award-winning practices have been proven to save millions of dollars
in repair costs and thousands of engineering hours, and continue to
set higher standards for testing technology under proactive
leadership and management to gain higher quality and customer satisfaction.
 

Links

EE COURSES CONDUCTED SINCE 1997

EE104. Applied Programming in Electrical Engineering   Syllabus [PDF] 

EE118. Digital Logic Circuit Design Syllabus [PDF]

EE120. Microprocessor Based System Design  Syllabus [PDF] 

EE138. Introduction to Embedded Control System Design  Syllabus [PDF] 

EE176. Computer Organization  Syllabus [PDF] 

EE177. Digital System Interfacing  Syllabus [PDF]

EE178. Digital Design with FPGAs  Syllabus [PDF]

EE179. Digital Design Using Hardware Description Languages Syllabus [PDF]

EE180. Individual Studies  Syllabus [PDF]

EE189. Special Topics in Electrical Engineering  Syllabus [PDF]

EE242.  Embedded Hardware Design  Syllabus [PDF] 

EE270.  Advanced Logic Design  Syllabus [PDF] 

EE271.  Digital System Design and Synthesis Syllabus [PDF] 

EE275.  Advanced Computer Architectures  Syllabus [PDF] 

EE279. Special Topics in Digital Systems  Syllabus [PDF]