Big Data | 2015. 4. 14. 11:55

To use JDBC, you first need the JDBC driver, so let's download it.

URL : https://mariadb.com/kb/en/mariadb/about-the-mariadb-java-client/


If you can't download the file from the site, grab it from the attachment below.

mariadb-java-client-1.1.8.jar


If you use Maven, you can also just search for mariadb and add it as a dependency.


Java sample source:

try {
    Class.forName("org.mariadb.jdbc.Driver");

    Connection connection = DriverManager.getConnection(
            "jdbc:mariadb://localhost:3306/project", "root", "");
    Statement statement = connection.createStatement();

    String uname = "xyz", pass = "abc";
    // Caution: concatenating values into SQL like this is vulnerable to
    // SQL injection; prefer PreparedStatement with ? placeholders.
    statement.executeUpdate("insert into user values('" + uname + "','" + pass + "')");

    statement.close();
    connection.close();
} catch (Exception e) {
    e.printStackTrace();
}
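The sample above splices uname and pass directly into the SQL string, which is open to SQL injection; in JDBC the fix is a PreparedStatement with ? placeholders. The same parameterization idea can be sketched with Python's standard sqlite3 module (an in-memory stand-in used here only for illustration, not the MariaDB driver from this post):

```python
import sqlite3

# In-memory database standing in for the "project" database (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (uname TEXT, pass TEXT)")

uname, pw = "xyz", "abc"

# The ? placeholders (same symbol JDBC's PreparedStatement uses) let the
# driver escape the values, so hostile input such as
# "x','y'); DROP TABLE user;--" is stored as plain data, never executed.
conn.execute("INSERT INTO user VALUES (?, ?)", (uname, pw))
conn.commit()

rows = conn.execute("SELECT uname, pass FROM user").fetchall()
print(rows)  # [('xyz', 'abc')]
```

In the Java code this corresponds to `connection.prepareStatement("insert into user values(?, ?)")` followed by `setString` calls for each parameter.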


Posted by 원찬식
Big Data | 2015. 4. 14. 11:24

I went to the site to install MariaDB on Mac OS, but there was no installer available..

So let's install it using brew instead.


Before installing, update brew first.

1. brew update


Now let's install MariaDB.

2. brew install mariadb

When output like the screen below appears, the installation is done.


By default, the install path will be:
/usr/local/Cellar/mariadb/10.0.17


Now let's start MariaDB.

3. mysql.server start

If SUCCESS! is printed, it's running!!


Now then, shall we log in?

4. mysql -uroot


That completes the basic setup and connection!!
Next time, we'll connect from source code using JDBC...

Posted by 원찬식
Big Data | 2015. 3. 5. 20:57

./configure                                      # configure the PostgreSQL source tree

make                                             # build

su                                               # switch to root to install

make install                                     # installs into /usr/local/pgsql

adduser postgres                                 # create the postgres OS user

mkdir /usr/local/pgsql/data                      # create the data directory

chown postgres /usr/local/pgsql/data             # hand it over to postgres

su - postgres                                    # work as postgres from here on

/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data                      # initialize the cluster

/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &    # start the server in the background

/usr/local/pgsql/bin/createdb test               # create a test database

/usr/local/pgsql/bin/psql test                   # connect to it



/etc/rc.d/init.d/postgresql start

Afterwards, start it with the command above.

Posted by 원찬식
Big Data | 2015. 3. 5. 20:06


How to read a file with the COPY command and bulk-load it into the DB.


====================================================================================

Example

====================================================================================

// Table design

CREATE TABLE EX_LOG
(
    IDX DATE,
    NUM CHAR(7),
    CONTENTS_ID CHAR(21),
    LEVEL1 CHAR(20),
    CHRG_AMT INTEGER,
    MSG_PATTERN INTEGER
);


// Read the file and load it into the DB with COPY

COPY EX_LOG (IDX, NUM, CONTENTS_ID, LEVEL1, CHRG_AMT, MSG_PATTERN)
FROM '/usr/pgsql-9.4/data/poc_data2.txt';
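COPY reads the file row by row (tab-delimited text by default) and appends every row to the table in a single server-side pass. What it does can be sketched client-side with Python's sqlite3 and executemany; the sample rows below are made-up placeholders, not real log data:

```python
import os
import sqlite3
import tempfile

# Hypothetical sample file shaped like EX_LOG rows (tab-delimited,
# matching COPY's default text format).
sample = ("2015-03-05\t0000001\tC001\tL1\t100\t1\n"
          "2015-03-05\t0000002\tC002\tL2\t200\t2\n")
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write(sample)
tmp.close()

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE EX_LOG (
    IDX TEXT, NUM TEXT, CONTENTS_ID TEXT,
    LEVEL1 TEXT, CHRG_AMT INTEGER, MSG_PATTERN INTEGER)""")

# Parse the whole file, then bulk-insert in one call, as COPY does server-side.
with open(tmp.name) as f:
    parsed = [line.rstrip("\n").split("\t") for line in f]
conn.executemany("INSERT INTO EX_LOG VALUES (?, ?, ?, ?, ?, ?)", parsed)
conn.commit()
os.unlink(tmp.name)

count = conn.execute("SELECT COUNT(*) FROM EX_LOG").fetchone()[0]
print(count)  # 2
```

The real COPY is much faster than row-by-row INSERTs because the parsing and loading happen inside the server in one transaction.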


// The source file was EUC-KR encoded, so convert it to UTF-8 first

iconv -f EUC-KR -t UTF-8 poc_data.txt > poc_data2.txt
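What the iconv step does can also be reproduced in Python, whose codecs include euc-kr; the sample string below is made up for illustration:

```python
# Sketch of what iconv -f EUC-KR -t UTF-8 does: re-encode bytes.
# Pretend these bytes were read from poc_data.txt in binary mode.
euc_kr_bytes = "한글 로그".encode("euc-kr")

text = euc_kr_bytes.decode("euc-kr")   # EUC-KR bytes -> text
utf8_bytes = text.encode("utf-8")      # text -> UTF-8 bytes

# Same characters, different byte representation.
print(utf8_bytes.decode("utf-8"))  # 한글 로그
```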


Posted by 원찬식
Big Data | 2015. 1. 16. 16:05




#!/bin/bash

#

# Postgres-XC Configuration file for pgxc_ctl utility. 

#

# Configuration file can be specified as -c option from pgxc_ctl command.   Default is

# $PGXC_CTL_HOME/pgxc_ctl.org.

#

# This is bash script so you can make any addition for your convenience to configure

# your Postgres-XC cluster.

#

# Please understand that pgxc_ctl provides only a subset of the configuration which Postgres-XC

# provides.  Here are several assumptions/restrictions pgxc_ctl depends on.

#

# 1) All the resources of pgxc nodes have to be owned by the same user.   Same user means

#    user with the same user name.  User ID may be different from server to server.

#    This must be specified as a variable $pgxcOwner.

#

# 2) All the servers must be reachable via ssh without password.   It is highly recommended

#    to setup key-based authentication among all the servers.

#

# 3) All the databases in coordinator/datanode have at least one common superuser.  Pgxc_ctl

#    uses this user to connect to coordinators and datanodes.   Again, no password should

#    be used to connect.  You have many options to do this, pg_hba.conf, pg_ident.conf and

#    others.  Pgxc_ctl provides a way to configure pg_hba.conf but not pg_ident.conf.   This

#    will be implemented in the later releases.

#

# 4) Gtm master and slave can have different port to listen, while coordinator and datanode

#    slave should be assigned the same port number as master.

#

# 5) Port number of a coordinator slave must be the same as its master.

#

# 6) Master and slave are connected using synchronous replication.  Asynchronous replication

#    has a slight (almost zero) chance to bring the total cluster into an inconsistent state.

#    This chance is very low and may be negligible.  Support of asynchronous replication

#    may be supported in the later release.

#

# 7) Each coordinator and datanode can have only one slave each.  Cascaded replication and

#    multiple slave are not supported in the current pgxc_ctl.

#

# 8) Killing nodes may end up with IPC resource leaks, such as semaphores and shared memory.

#    Only listening port (socket) will be cleaned with clean command.

#

# 9) Backup and restore are not supported in pgxc_ctl at present.   This is a big task and

#    may need considerable resource.

#

#========================================================================================

#

#

# pgxcInstallDir variable is needed if you invoke "deploy" command from pgxc_ctl utility.

# If you don't, you don't need this variable.

pgxcInstallDir=/usr/local/pgsql

#---- OVERALL -----------------------------------------------------------------------------

#

pgxcOwner=postgres # owner of the Postgres-XC database cluster.  Here, we use this

# both as the Linux user and the database user.  This must be

# the super user of each coordinator and datanode.

pgxcUser=$pgxcOwner # OS user of Postgres-XC owner


tmpDir=/tmp # temporary dir used in XC servers

localTmpDir=$tmpDir # temporary dir used here locally


configBackup=n # If you want config file backup, specify y to this value.

configBackupHost=pgxc-linker # host to backup config file

configBackupDir=$HOME/pgxc # Backup directory

configBackupFile=pgxc_ctl.bak # Backup file name --> Need to synchronize when original changed.


#---- GTM ------------------------------------------------------------------------------------


# GTM is mandatory.  You must have at least (and only) one GTM master in your Postgres-XC cluster.

# If GTM crashes and you need to reconfigure it, you can do it by pgxc_update_gtm command to update

# GTM master with others.   Of course, we provide pgxc_remove_gtm command to remove it.  This command

# will not stop the current GTM.  It is up to the operator.


#---- Overall -------

gtmName=gtm


#---- GTM Master -----------------------------------------------


#---- Overall ----

gtmMasterServer=localhost

gtmMasterPort=20001

gtmMasterDir=/usr/local/pgsql/data/gtm


#---- Configuration ---

gtmExtraConfig=none # Will be added to gtm.conf for both Master and Slave (done at initialization only)

gtmMasterSpecificExtraConfig=none # Will be added to Master's gtm.conf (done at initialization only)


#---- GTM Slave -----------------------------------------------


# Because GTM is a key component to maintain database consistency, you may want to configure GTM slave

# for backup.


#---- Overall ------

gtmSlave=n # Specify y if you configure GTM Slave.   Otherwise, GTM slave will not be configured and

# all the following variables will be reset.

gtmSlaveServer=node12 # value none means GTM slave is not available.  Give none if you don't configure GTM Slave.

gtmSlavePort=20001 # Not used if you don't configure GTM slave.

gtmSlaveDir=$HOME/pgxc/nodes/gtm # Not used if you don't configure GTM slave.

# Please note that when you have GTM failover, then there will be no slave available until you configure the slave

# again. (pgxc_add_gtm_slave function will handle it)


#---- Configuration ----

gtmSlaveSpecificExtraConfig=none # Will be added to Slave's gtm.conf (done at initialization only)


#---- GTM Proxy -------------------------------------------------------------------------------------------------------

# GTM proxy will be selected based upon which server each component runs on.

# When fails over to the slave, the slave inherits its master's gtm proxy.  It should be

# reconfigured based upon the new location.

#

# To do so, slave should be restarted.   So pg_ctl promote -> (edit postgresql.conf and recovery.conf) -> pg_ctl restart

#

# You don't have to configure GTM Proxy if you don't configure GTM slave or you are happy if every component connects

# to GTM Master directly.  If you configure GTM slave, you must configure GTM proxy too.


#---- Shortcuts ------

gtmProxyDir=$HOME/pgxc/nodes/gtm_pxy


#---- Overall -------

gtmProxy=n # Specify y if you configure at least one GTM proxy.   You may not configure gtm proxies

# only when you don't configure GTM slaves.

# If you specify this value not to y, the following parameters will be set to default empty values.

# If we find there're no valid Proxy server names (means, every servers are specified

# as none), then gtmProxy value will be set to "n" and all the entries will be set to

# empty values.

gtmProxyNames=(gtm_pxy1 gtm_pxy2 gtm_pxy3 gtm_pxy4) # Not used if it is not configured

gtmProxyServers=(node06 node07 node08 node09) # Specify none if you don't configure it.

gtmProxyPorts=(20001 20001 20001 20001) # Not used if it is not configured.

gtmProxyDirs=($gtmProxyDir $gtmProxyDir $gtmProxyDir $gtmProxyDir) # Not used if it is not configured.


#---- Configuration ----

gtmPxyExtraConfig=none # Extra configuration parameter for gtm_proxy.  Coordinator section has an example.

gtmPxySpecificExtraConfig=(none none none none)


#---- Coordinators ----------------------------------------------------------------------------------------------------


#---- shortcuts ----------

coordMasterDir=/usr/local/pgsql/data/coord

coordSlaveDir=/usr/local/pgsql/data/coord_slave

coordArchLogDir=/usr/local/pgsql/data/coord_archlog


#---- Overall ------------

coordNames=(coord1) # Master and slave use the same name

coordPorts=(5432) # Master and slave use the same port

poolerPorts=(20010) # Master and slave use the same pooler port

coordPgHbaEntries=(10.211.55.2/24) # Assumes that all the coordinator (master/slave) accepts

# the same connection

# This entry allows only $pgxcOwner to connect.

# If you'd like to setup another connection, you should

# supply these entries through files specified below.

# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust".   If you don't want

# such setups, specify the value () to this variable and supply what you want using coordExtraPgHba

# and/or coordSpecificExtraPgHba variables.


#---- Master -------------

coordMasterServers=(localhost) # none means this master is not available

coordMasterDirs=($coordMasterDir/coord1)

coordMaxWALsernder=1 # max_wal_senders: needed to configure slave. If zero value is specified,

# it is expected to supply this parameter explicitly by external files

# specified in the following. If you don't configure slaves, leave this value to zero.

coordMaxWALSenders=($coordMaxWALsernder)

# max_wal_senders configuration for each coordinator.


#---- Slave -------------

coordSlave=n # Specify y if you configure at least one coordinator slave.  Otherwise, the following

# configuration parameters will be set to empty values.

# If no effective server names are found (that is, every servers are specified as none),

# then coordSlave value will be set to n and all the following values will be set to

# empty values.

coordSlaveSync=y # Specify to connect with synchronized mode.

coordSlaveServers=(node07) # none means this slave is not available

coordSlaveDirs=($coordSlaveDir/coordSlave1)

coordArchLogDirs=($coordArchLogDir/coordSlave1)


#---- Configuration files---

# Need these when you'd like setup specific non-default configuration 

# These files will go to corresponding files for the master.

# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries 

# Or you may supply these files manually.

coordExtraConfig=coordExtraConfig # Extra configuration file for coordinators.  

# This file will be added to all the coordinators'

# postgresql.conf

# Please note that the following sets up minimum parameters which you may want to change.

# You can put your postgresql.conf lines here.

cat > $coordExtraConfig <<EOF

#================================================

# Added to all the coordinator postgresql.conf

# Original: $coordExtraConfig

log_destination = 'stderr'

logging_collector = on

log_directory = 'pg_log'

listen_addresses = '*'

max_connections = 100

EOF


# Additional Configuration file for specific coordinator master.

# You can define each setting by similar means as above.

coordSpecificExtraConfig=(none)

coordExtraPgHba=none # Extra entry for pg_hba.conf.  This file will be added to all the coordinators' pg_hba.conf

coordSpecificExtraPgHba=(none)


#----- Additional Slaves -----

#

# Please note that this section is just a suggestion how we extend the configuration for

# multiple and cascaded replication.   They're not used in the current version.

#

coordAdditionalSlaves=n # Additional slave can be specified as follows: where you

coordAdditionalSlaveSet=(cad1) # Each entry names a set of slaves.   In this case, one set of slaves is

# configured

cad1_Sync=n   # All the slaves at "cad1" are connected with asynchronous mode.

# If not, specify "y"

# The following lines specifies detailed configuration for each

# slave tag, cad1.  You can define cad2 similarly.

cad1_Servers=(node08 node09 node06 node07) # Hosts

cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1

cad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)

cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1

cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)



#---- Datanodes -------------------------------------------------------------------------------------------------------


#---- Shortcuts --------------

datanodeMasterDir=/usr/local/pgsql/data/dn_master

datanodeSlaveDir=$HOME/usr/local/pgsql/data/dn_slave

datanodeArchLogDir=$HOME/usr/local/pgsql/data/datanode_archlog


#---- Overall ---------------

#primaryDatanode=datanode1 # Primary Node.

# At present, XC has a problem issuing ALTER NODE against the primary node.  Until it is fixed, the test will be done

# without this feature.

primaryDatanode=dn1 # Primary Node.

datanodeNames=(dn1 dn2)

datanodePorts=(5433 5434) # Master and slave use the same port!

datanodePoolerPorts=(20011 20012) # Master and slave use the same port!

datanodePgHbaEntries=(10.211.55.2/24) # Assumes that all the coordinator (master/slave) accepts

# the same connection

# This list sets up pg_hba.conf for $pgxcOwner user.

# If you'd like to setup other entries, supply them

# through extra configuration files specified below.

# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust".   If you don't want

# such setups, specify the value () to this variable and supply what you want using datanodeExtraPgHba

# and/or datanodeSpecificExtraPgHba variables.


#---- Master ----------------

datanodeMasterServers=(localhost localhost) # none means this master is not available.

# This means that there should be the master but is down.

# The cluster is not operational until the master is

# recovered and ready to run.

datanodeMasterDirs=($datanodeMasterDir/dn1 $datanodeMasterDir/dn2)

datanodeMaxWalSender=2 # max_wal_senders: needed to configure slave. If zero value is 

# specified, it is expected this parameter is explicitly supplied

# by external configuration files.

# If you don't configure slaves, leave this value zero.

datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender)

# max_wal_senders configuration for each datanode


#---- Slave -----------------

datanodeSlave=n # Specify y if you configure at least one datanode slave.  Otherwise, the following

# configuration parameters will be set to empty values.

# If no effective server names are found (that is, every servers are specified as none),

# then datanodeSlave value will be set to n and all the following values will be set to

# empty values.

datanodeSlaveServers=(node07 node08 node09 node06) # value none means this slave is not available

datanodeSlaveSync=y # If datanode slave is connected in synchronized mode

datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir)

datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir )


# ---- Configuration files ---

# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries here.

# These files will go to corresponding files for the master.

# Or you may supply these files manually.

datanodeExtraConfig=none # Extra configuration file for datanodes.  This file will be added to all the 

# datanodes' postgresql.conf

datanodeSpecificExtraConfig=(none none none none)

datanodeExtraPgHba=none # Extra entry for pg_hba.conf.  This file will be added to all the datanodes' pg_hba.conf

datanodeSpecificExtraPgHba=(none none none none)


#----- Additional Slaves -----

datanodeAdditionalSlaves=n # Additional slave can be specified as follows: where you

# datanodeAdditionalSlaveSet=(dad1 dad2) # Each specifies set of slaves.   This case, two set of slaves are

# configured

# dad1_Sync=n   # All the slaves at "cad1" are connected with asynchronous mode.

# If not, specify "y"

# The following lines specifies detailed configuration for each

# slave tag, cad1.  You can define cad2 similarly.

# dad1_Servers=(node08 node09 node06 node07) # Hosts

# dad1_dir=$HOME/pgxc/nodes/coord_slave_cad1

# dad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)

# dad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1

# dad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)


#---- WAL archives -------------------------------------------------------------------------------------------------

walArchive=n # If you'd like to configure WAL archive, edit this section.

# Pgxc_ctl assumes that if you configure WAL archive, you configure it

# for all the coordinators and datanodes.

# Default is "no".   Please specify "y" here to turn it on.

#

# End of Configuration Section

#

#==========================================================================================================================


#========================================================================================================================

# The following is for extension.  Just demonstrate how to write such extension.  There's no code

# which takes care of them so please ignore the following lines.  They are simply ignored by pgxc_ctl.

# No side effects.

#=============<< Beginning of future extension demonstration >> ========================================================

# You can setup more than one backup set for various purposes, such as disaster recovery.

walArchiveSet=(war1 war2)

war1_source=(master) # you can specify master, slave or any other additional slaves as a source of WAL archive.

# Default is the master

wal1_source=(slave)

wal1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)

war1_host=node10 # All the nodes are backed up at the same host for a given archive set

war1_backupdir=$HOME/pgxc/backup_war1

wal2_source=(master)

war2_host=node11

war2_backupdir=$HOME/pgxc/backup_war2

#=============<< End of future extension demonstration >> ========================================================



Attachment: pgxc_ctl.conf



Posted by 원찬식
Big Data | 2015. 1. 12. 18:14

1. Install packages

yum install readline-devel

yum install zlib-devel

yum install flex

yum install bison

yum install jade

yum install docbook-style-dsssl


2. Install PostgreSQL

gmake

gmake install


adduser postgres


3. Install pgxc_ctl

cd contrib/pgxc_ctl

make

make install

export PATH=$PATH:/usr/local/pgsql/bin  



make distclean




psql

1. Query each datanode

DN1 : psql -h localhost -p 5433 -U postgres -W -d wonword

DN2 : psql -h localhost -p 5434 -U postgres -W -d wonword

Posted by 원찬식
Big Data | 2015. 1. 6. 14:30

YUM Installation

PostgreSQL can be installed using RPMs (binary) or SRPMs (source) managed by YUM. This is available for the following Linux distributions (both 32- and 64-bit platforms; for the current release and prior release or two):

  • Fedora
  • Red Hat Enterprise Linux
  • CentOS
  • Scientific Linux
  • Oracle Enterprise Linux

See links from the main repository, http://yum.postgresql.org:


Instructions

Configure your YUM repository

Locate and edit your distribution's .repo file, located:

  • On Fedora: /etc/yum.repos.d/fedora.repo and /etc/yum.repos.d/fedora-updates.repo, [fedora] sections
  • On CentOS: /etc/yum.repos.d/CentOS-Base.repo, [base] and [updates] sections
  • On Red Hat: /etc/yum/pluginconf.d/rhnplugin.conf, [main] section

To the section(s) identified above, you need to append a line (otherwise dependencies might resolve to the postgresql supplied by the base repository):

exclude=postgresql*

Install PGDG RPM file

A PGDG file is available for each distribution/architecture/database version combination. Browse http://yum.postgresql.org and find your correct RPM. For example, to install PostgreSQL 9.4 on CentOS 6 64-bit:

yum localinstall http://yum.postgresql.org/9.4/redhat/rhel-6-x86_64/pgdg-centos94-9.4-1.noarch.rpm

Install PostgreSQL

To list available packages:

yum list postgres*

For example, to install a basic PostgreSQL 9.4 server:

yum install postgresql94-server

Other packages can be installed according to your needs.

Post-installation commands

After installing the packages, a database needs to be initialized and configured.

In the commands below, the value of <name> will vary depending on the version of PostgreSQL used.

For PostgreSQL version 9.0 and above, the <name> includes the major.minor version of PostgreSQL, e.g., postgresql-9.4

For versions 8.x, the <name> is always postgresql (without the version signifier).

Data Directory

The PostgreSQL data directory contains all of the data files for the database. The variable PGDATA is used to reference this directory.

For PostgreSQL version 9.0 and above, the default data directory is:

/var/lib/pgsql/<name>/data

For example:

/var/lib/pgsql/9.4/data

For versions 7.x and 8.x, default data directory is:

/var/lib/pgsql/data/

Initialize

The first command (only needed once) is to initialize the database in PGDATA.

service <name> initdb

E.g. for version 9.4:

service postgresql-9.4 initdb

If the previous command did not work, try directly calling the setup binary, located in a similar naming scheme:

/usr/pgsql-y.x/bin/postgresqlyx-setup initdb

E.g. for version 9.4:

/usr/pgsql-9.4/bin/postgresql94-setup initdb

Startup

If you want PostgreSQL to start automatically when the OS starts:

chkconfig <name> on

E.g. for version 9.4:

chkconfig postgresql-9.4 on

Control service

To control the database service, use:

service <name> <command>

where <command> can be:

  • start : start the database
  • stop : stop the database
  • restart : stop/start the database; used to read changes to core configuration files
  • reload : reload pg_hba.conf file while keeping database running


E.g. to start version 9.4:

service postgresql-9.4 start

Removing

To remove everything:

yum erase postgresql94*

Or remove individual packages as desired.



Source: https://wiki.postgresql.org/wiki/YUM_Installation

Posted by 원찬식
Big Data | 2014. 11. 21. 12:12

If a dash (-) gets into the key value, it violates the RESP format and the error below is raised:

 WRONGTYPE Operation against a key holding the wrong kind of value


Reference: http://redis.io/topics/protocol

RESP Errors

RESP has a specific data type for errors. Actually errors are exactly like RESP Simple Strings, but the first character is a minus '-' character instead of a plus. The real difference between Simple Strings and Errors in RESP is that errors are treated by clients as exceptions, and the string that composes the Error type is the error message itself.

The basic format is:

"-Error message\r\n"

Error replies are only sent when something wrong happens, for instance if you try to perform an operation against the wrong data type, or if the command does not exist and so forth. An exception should be raised by the library client when an Error Reply is received.

The following are examples of error replies:

-ERR unknown command 'foobar'
-WRONGTYPE Operation against a key holding the wrong kind of value

The first word after the "-", up to the first space or newline, represents the kind of error returned. This is just a convention used by Redis and is not part of the RESP Error format.

For example, ERR is the generic error, while WRONGTYPE is a more specific error that implies that the client tried to perform an operation against the wrong data type. This is called an Error Prefix and is a way to allow the client to understand the kind of error returned by the server without relying on the exact message given, which may change over time.
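Following the convention above, a client can split the error prefix from the rest of the message. Here is a minimal Python sketch; the function name and return shape are my own illustration, not a real Redis client API:

```python
def parse_resp_error(reply: bytes):
    """Parse a RESP Error reply such as b"-ERR unknown command 'foobar'\\r\\n".

    Returns (prefix, message). The prefix is the first word after '-',
    up to the first space -- a Redis convention, not part of RESP itself.
    """
    assert reply.startswith(b"-"), "RESP errors start with '-'"
    body = reply[1:].rstrip(b"\r\n").decode()
    prefix, _, message = body.partition(" ")
    return prefix, message

print(parse_resp_error(b"-ERR unknown command 'foobar'\r\n"))
print(parse_resp_error(
    b"-WRONGTYPE Operation against a key holding the wrong kind of value\r\n"))
```

A client library would typically map the prefix to an exception class (or fall back to a generic error, as the last paragraph notes).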

A client implementation may return different kind of exceptions for different errors, or may provide a generic way to trap errors by directly providing the error name to the caller as a string.

However, such a feature should not be considered vital as it is rarely useful, and a limited client implementation may simply return a generic error condition, such as false.


Posted by 원찬식
Big Data | 2014. 10. 29. 10:22

#include <stdio.h>

#define A 5   // number of blocks
#define B 2   // threads per block
#define N 10  // number of elements

__global__ void functionG(float *input, float *output);
void cudaErr(const char *msg);

int main() {
    printf("hello CUDA\n");
    float *x_h, *y_h, *x_d, *y_d;
    size_t memSize = sizeof(float) * N;

    // host buffers
    x_h = (float *)malloc(memSize);
    y_h = (float *)malloc(memSize);

    // device buffers
    cudaMalloc((void **)&x_d, memSize); cudaErr("malloc x_d");
    cudaMalloc((void **)&y_d, memSize); cudaErr("malloc y_d");
    cudaMemset(x_d, 0, memSize); cudaErr("memset x_d");
    cudaMemset(y_d, 0, memSize); cudaErr("memset y_d");

    for (int i = 0; i < N; i++) { x_h[i] = i; y_h[i] = 0.0f; }

    cudaMemcpy(x_d, x_h, memSize, cudaMemcpyHostToDevice); cudaErr("memcpy HtD");
    functionG<<<A, B>>>(x_d, y_d); cudaErr("launch functionG");  // A*B = 10 threads
    cudaMemcpy(y_h, y_d, memSize, cudaMemcpyDeviceToHost); cudaErr("memcpy result");

    for (int i = 0; i < N; i++) { printf("%d, %f %f\n", i, x_h[i], y_h[i]); }
    return 0;
}

__global__ void functionG(float *input, float *output)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < N) {
        output[idx] = input[idx] + 0.001f * idx;
    }
}

void cudaErr(const char *msg) {
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess) {
        printf("%d %s %s\n", err, msg, cudaGetErrorString(err));
    }
}
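Since functionG is launched with <<<A, B>>> = <<<5, 2>>>, exactly A*B = 10 threads run, each computing its own global index. That index arithmetic can be checked on the CPU with a short Python sketch of the grid (a simulation only, no CUDA involved):

```python
A, B, N = 5, 2, 10            # gridDim.x, blockDim.x, element count (from the post)
x = [float(i) for i in range(N)]
y = [0.0] * N

# One loop iteration per (blockIdx.x, threadIdx.x) pair, like one GPU thread.
for block in range(A):
    for thread in range(B):
        idx = block * B + thread   # blockIdx.x * blockDim.x + threadIdx.x
        if idx < N:                # the kernel's bounds check
            y[idx] = x[idx] + 0.001 * idx

print(y[3])  # approximately 3.003
```

Every idx from 0 to 9 is produced exactly once, which is why the kernel can safely write output[idx] without synchronization.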

Posted by 원찬식
Big Data | 2014. 10. 28. 15:55

#include <stdio.h>
#define NN 10
//__device__ float a[10];   // global memory

__global__
void function(float *input, int size) {
        /* GPU SOURCE CODE HERE!! */
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        // <<<3, 4>>> launches 12 threads but only `size` (10) elements exist,
        // so each thread must check its own index before touching memory.
        if (i < size) {
                input[i] = input[i] * input[i];
        }
}

int main()
{
        float *a, *b_dev;       // a: CPU (host), b_dev: GPU (device)
        size_t memSize = sizeof(float) * NN;
        a = (float *)malloc(memSize);           // same as float a[10]
        for (int i = 0; i < NN; i++) a[i] = i;  // initialize host data

        // malloc can't allocate device memory, so use cudaMalloc instead
        cudaMalloc((void **)&b_dev, memSize);

        cudaMemcpy(b_dev, a, memSize, cudaMemcpyHostToDevice);  // upload

        function<<<3, 4>>>(b_dev, NN);

        cudaMemcpy(a, b_dev, memSize, cudaMemcpyDeviceToHost);  // download

        printf("ERR MESSAGE:\n%s\n", cudaGetErrorString(cudaGetLastError()));

        return 0;
}

Posted by 원찬식