To follow the steps below you will need the XAMPP application installed. The application itself runs in the browser.
CodeIgniter Tutorial: Online Store Application Source Code with the CodeIgniter Framework. Welcome back, everyone. This time I am sharing the source code of an online store built with the CodeIgniter framework.
This online store source code is a good fit if you are working on an undergraduate thesis or final project; it always helps to have a concrete example to work from. The source code is licensed under the GNU General Public License, so it may be modified and redistributed.

The following environment is recommended for anyone working on the online store with the CodeIgniter framework:

- Apache 2.x web server (XAMPP 1.7.x)
- PHP 5.x (XAMPP 1.7.x)
- MySQL 5.x (XAMPP 1.7.x)

The technologies used are: HTML, CSS3, jQuery, JavaScript, Ajax, PHP, MySQL and the CodeIgniter framework.

Installation:

1. Copy the 'tokoonline' directory to your 'www' or 'htdocs' directory.
2. Create a database named 'tokoonline' (or any other name you prefer).
3. Import the SQL file into your database.
4. Rename application/config/config.php.example to config.php.
5. Rename application/config/database.php.example to database.php.
6. Configure database.php (host, user, password, db) to match your MySQL database.

Open the application in your browser and you are done. Feel free to develop this source code further; I hope this tutorial is useful and can serve as a reference for your thesis or final project. Below I have attached some of the required files.
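As a sketch of step 6, the relevant part of database.php might look like this (the values shown are assumptions for a default local XAMPP install with a root user and no password; adjust them to your own setup):

```php
<?php
// application/config/database.php (CodeIgniter 2.x style settings)
$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'root';
$db['default']['password'] = '';
$db['default']['database'] = 'tokoonline';
$db['default']['dbdriver'] = 'mysql';
```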
In the case of dictation, a grammar can be used to indicate some words that are likely to be spoken. It is not feasible to represent the entire spoken English language as a grammar, so the recogniser does its best and uses the grammar to help out. The recogniser tries to use context information from the text to work out which words are more likely than others. At its simplest, the Microsoft SR engine can use a dictation grammar like this:

```
[Grammar]
LangID=2057 ;2057 = $809 = UK English
Type=dictation
```

With Command and Control, the permitted words are limited to the supported commands.
Introduction

This article looks at adding support for speech capabilities to Microsoft Windows applications written in Delphi, using the Microsoft Speech API (SAPI).
The grammar defines various rules that dictate what will be said, and this makes the recogniser's job much easier. Rather than trying to understand anything spoken, it only needs to recognise speech that follows the supplied rules. A Command and Control grammar is typically referred to as a Context-Free Grammar (CFG). A simple CFG that recognises three colours might look like this (the `<Start>` rule name is an assumption, as the original angle-bracketed rule names were lost in extraction):

```
[Grammar]
LangID=2057 ;UK English - 2057 = $809
Type=cfg
[<Start>]
<Start> = colour red
<Start> = colour green
<Start> = colour blue
```

The first thing you need to do is initialise an audio destination object that will be used by the speech engine object. To make sure things are on the right track, the default Wave Mapper is selected as the device to work with.
```delphi
uses
  Speech, MMSystem;

type
  TfrmDirectTTSAPI = class(TForm)
    // ...
    AMMD: IAudioMultiMediaDevice;
    // ...
  end;

procedure TfrmDirectTTSAPI.FormCreate(Sender: TObject);
begin
  SendMessage(lstProgress.Handle, LB_SETHORIZONTALEXTENT, Width, 0);
  Log('About to connect to multimedia device');
  AMMD := CreateComObject(CLSID_MMAudioDest) as IAudioMultiMediaDevice;
  OleCheck(AMMD.DeviceNumSet(WAVE_MAPPER));
  Log('Connected to multimedia device');
end;
```

Engine Enumerator Object

This sample application lists all supported modes in a combobox and lets the user select a mode to use.
The combobox Items property stores the textual mode names in its Strings array and pointers to the corresponding mode records in the Objects array. As different modes are selected, a listbox is used to display the mode attributes stored in the mode record.
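Retrieving a stored mode record when the selection changes might be sketched like this (the control names and the displayed fields are assumptions, not taken from the sample application):

```delphi
// Sketch: recover the mode record stashed in the Objects array and
// display some of its attributes in a listbox.
procedure TfrmDirectTTSAPI.cbEnginesChange(Sender: TObject);
var
  PModeInfo: PTTSModeInfo;
begin
  with cbEngines do
    PModeInfo := PTTSModeInfo(Items.Objects[ItemIndex]);
  lstAttributes.Items.Clear;
  lstAttributes.Items.Add('Mode: ' + String(PModeInfo.szModeName));
  lstAttributes.Items.Add('Mfg: ' + String(PModeInfo.szMfgName));
  lstAttributes.Items.Add('Product: ' + String(PModeInfo.szProductName));
end;
```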
This way, the user can make an informed decision about which voice mode to use.

```delphi
type
  TfrmDirectTTSAPI = class(TForm)
    // ...
    TTSEnum: ITTSEnum;
    // ...
  end;

procedure TfrmDirectTTSAPI.FormCreate(Sender: TObject);
var
  ModeInfo: TTSModeInfo;
  PModeInfo: PTTSModeInfo;
  NumFound: DWord;
begin
  // ...
end;
```

This code gets access to both interfaces and checks the current voice pitch, speed and volume (these details are displayed in track bars on the form).

```delphi
  TTSDialogs: ITTSDialogs;
  TTSAttrs: ITTSAttributes;
```

As with the Voice Text API, there are different calls to start speech and to continue paused speech, so the same approach of using a helper flag has been employed.
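The attribute checks themselves are not shown above; a minimal sketch, assuming the SAPI 4 ITTSAttributes getters (PitchGet, SpeedGet, VolumeGet) and hypothetical track bars tbPitch, tbSpeed and tbVolume, might look like this:

```delphi
// Sketch (track-bar names are assumptions): read the current voice
// pitch, speed and volume via ITTSAttributes and show them in the UI.
procedure TfrmDirectTTSAPI.ShowVoiceAttributes;
var
  Pitch: Word;
  Speed, Volume: DWord;
begin
  OleCheck(TTSAttrs.PitchGet(Pitch));
  OleCheck(TTSAttrs.SpeedGet(Speed));
  OleCheck(TTSAttrs.VolumeGet(Volume));
  tbPitch.Position := Pitch;
  tbSpeed.Position := Speed;
  tbVolume.Position := Volume and $FFFF; // low word carries the level
end;
```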
The text to speak is taken from a richedit control. You can see that a TSData record is necessary to represent the text to be added to the speech queue.

```delphi
procedure TfrmDirectTTSAPI.btnPlayClick(Sender: TObject);
var
  SData: TSData;
begin
  if not BeenPaused then
  begin
    SData.dwSize := Succ(Length(reText.Text));
    SData.pData := PChar(reText.Text);
    OleCheck(TTSCentral.TextData(CHARSET_TEXT, 0, SData,
      Pointer(BufferSink), ITTSBufNotifySink));
  end
  else
  begin
    OleCheck(TTSCentral.AudioResume);
    BeenPaused := False
  end
end;

procedure TfrmDirectTTSAPI.btnPauseClick(Sender: TObject);
begin
  OleCheck(TTSCentral.AudioPause);
  BeenPaused := True
end;

procedure TfrmDirectTTSAPI.btnStopClick(Sender: TObject);
begin
  OleCheck(TTSCentral.AudioReset);
end;
```

As text is added to the speech buffer (when you call TextData), a reference to an object that implements the buffer notification interface (ITTSBufNotifySink) is passed along, as you can see in the call. This object is optionally created through one of the check boxes on the form and logs details of the buffer notification methods, which are:

• TextDataStarted: the buffer data has started being processed
• TextDataDone: the buffer has been emptied and all text has been sent to the audio device
• BookMark: a bookmark tag has been encountered
• WordPosition: a new word is being processed
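A sink class along these lines could be declared as follows. This is a sketch only: the exact method signatures, and in particular the 64-bit timestamp type, must be checked against the Speech import unit you are using.

```delphi
type
  // Sketch of a buffer notification sink; parameter types are assumptions
  // based on the SAPI 4 headers and should be verified.
  TTTSBufNotifySink = class(TInterfacedObject, ITTSBufNotifySink)
  public
    function TextDataStarted(qTimeStamp: QWord): HResult; stdcall;
    function TextDataDone(qTimeStamp: QWord; dwFlags: DWord): HResult; stdcall;
    function BookMark(qTimeStamp: QWord; dwMarkNum: DWord): HResult; stdcall;
    function WordPosition(qTimeStamp: QWord; dwByteOffset: DWord): HResult; stdcall;
  end;

function TTTSBufNotifySink.TextDataStarted(qTimeStamp: QWord): HResult;
begin
  Log('TextDataStarted');
  Result := S_OK;
end;
// The remaining methods follow the same pattern: log the event, return S_OK.
```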
```delphi
procedure TfrmDirectTTSAPI.SetWordStyle(FirstChar: Integer; Styles: TFontStyles);
var
  WordLen: Integer;
begin
  with reText do
  begin
    WordLen := 1;
    while FirstChar + WordLen < { ... } do
      // ...
  end;
end;

// ...
  if { ... } <> ' ' then
    FForm.memEnginePhonemes.Text := FForm.memEnginePhonemes.Text + #32;
// ...
```

DirectSpeechRecognition API

The first thing you need to do is initialise an audio source object that will be used by the speech engine object, telling it which audio device to use.

```delphi
uses
  Speech, MMSystem;

type
  TfrmDirectSRAPI = class(TForm)
    // ...
    AMMD: IAudioMultiMediaDevice;
    // ...
  end;

procedure TfrmDirectSRAPI.FormCreate(Sender: TObject);
begin
  SendMessage(lstProgress.Handle, LB_SETHORIZONTALEXTENT, Width, 0);
  Log('About to connect to multimedia device');
  AMMD := CreateComObject(CLSID_MMAudioSource) as IAudioMultiMediaDevice;
  OleCheck(AMMD.DeviceNumSet(WAVE_MAPPER));
  Log('Connected to multimedia device');
end;
```

Engine Enumerator Object

This sample application lists all supported modes in a combobox and lets the user browse the available modes, but automatically selects the first mode to use. The combobox Items property stores the textual mode names in the Strings array and pointers to the corresponding mode records in the Objects array. As different modes are selected, a listbox is used to display the mode attributes stored in the mode record. This way, the user can see the attributes of the various modes on offer.
```delphi
type
  TfrmDirectSRAPI = class(TForm)
    // ...
    SREnum: ISREnum;
    // ...
  end;

procedure TfrmDirectSRAPI.FormCreate(Sender: TObject);
var
  ModeInfo: TSRModeInfo;
  PModeInfo: PSRModeInfo; // assumed declaration, missing from the original listing
  NumFound: DWord;
begin
  // ...
  Log('About to enumerate speech engines');
  SREnum := CreateComObject(CLSID_SREnumerator) as ISREnum;
  OleCheck(SREnum.Reset);
  OleCheck(SREnum.Next(1, ModeInfo, @NumFound));
  while NumFound > 0 do
  begin
    New(PModeInfo);
    PModeInfo^ := ModeInfo;
    cbEngines.Items.AddObject(
      String(ModeInfo.szModeName), TObject(PModeInfo));
    OleCheck(SREnum.Next(1, ModeInfo, @NumFound));
  end;
  if cbEngines.Items.Count > 0 then
  begin
    cbEngines.ItemIndex := 0;      //Select 1st engine
    cbEngines.OnChange(cbEngines); //& ensure OnChange triggers
  end;
  Log('Enumerated speech engines');
end;

procedure TfrmDirectSRAPI.FormDestroy(Sender: TObject);
var
  I: Integer;
begin
  // ...
end;
```

The engine object also (probably) implements the ISRDialogs interface, which allows access to the standard SR engine dialogs, and the ISRAttributes interface, which allows you to check on the SR attributes. You can either use the Supports function to see if these interfaces are supported, or check the TSRModeInfo.dwInterfaces mask for the SRI_ISRATTRIBUTES or SRI_ISRDIALOGS flags.

```delphi
  SRDialogs: ISRDialogs;
  SRAttrs: ISRAttributes;
```
```delphi
Log('About to make dialogs available');
if PModeInfo.dwInterfaces and SRI_ISRDIALOGS > 0 then
begin
  SRDialogs := SRCentral as ISRDialogs;
  Log('Dialogs are available');
end
else
  Log('Dialogs are not supported');
Log('About to make speech attributes available');
if PModeInfo.dwInterfaces and SRI_ISRATTRIBUTES > 0 then
begin
  SRAttrs := SRCentral as ISRAttributes;
  Log('Attributes available');
end
else
  Log('Attributes interface not supported');
```

Grammar Compiler

Next, a grammar compiler object is used to take a grammar definition (a simple dictation grammar) and compile it.
This compiled grammar will be passed along to the SR engine shortly. A brief overview of grammars is given towards the start of this article, and you can find more information in the SAPI documentation.

```delphi
  STGramComp: ISTGramComp;

const
  Grammar: PChar = '[Grammar]'#13'LangID=2057'#13'Type=dictation'#13;
```
```delphi
//Load grammar
STGramComp := CreateComObject(CLSID_STGramComp) as ISTGramComp;
OleCheck(STGramComp.FromMemory(Grammar, Succ(StrLen(Grammar))));
//Compile grammar
OleCheck(STGramComp.Compile(PPWideChar(nil)^, nil));
```

The grammar compiler can generate messages indicating whether anything was wrong with the grammar (or just that the grammar compiled successfully). If you wish to see the error message, you can change the code to:

```delphi
uses
  ActiveX;
```
```delphi
  STGramComp: ISTGramComp;
  Size: DWord;
  GramErr: PWideChar;

const
  Grammar: PChar = '[Grammar]'#13'LangID=2057'#13'Type=dictation'#13;
```

```delphi
//Load grammar
STGramComp := CreateComObject(CLSID_STGramComp) as ISTGramComp;
OleCheck(STGramComp.FromMemory(Grammar, Succ(StrLen(Grammar))));
//Compile grammar
GramErr := nil;
OleCheck(STGramComp.Compile(GramErr, @Size));
if Assigned(GramErr) then
begin
  ShowMessage(WideString(GramErr));
  CoTaskMemFree(GramErr);
end;
```

Grammar And Status Notifications

The grammar compiler can now load the compiled grammar into the engine object and, whilst doing so, can set up a notification object that receives recognition-related notifications (from the ISRGramNotifySink interface).
These include the PhraseStart, PhraseHypothesis and PhraseFinish notifications (among others) that we saw earlier.

```delphi
  SRGramUnk: IUnknown;
  SRNotifySink: ISRNotifySink;
```

```delphi
SRGramNotifySink := TSRGramNotifySink.Create(Self);
OleCheck(STGramComp.GrammarLoad(SRCentral,
  Pointer(SRGramNotifySink), ISRGramNotifySink, SRGramUnk));
```

The control manages a list of all modes (the number is given by the CountEngines property and the active one is in CurrentMode). When the combobox is populated, the Strings property of the Items TStrings object is filled with the descriptive mode names, whereas the Objects property is simply filled with the mode index.

If you get issues with SR stopping (or not starting) unexpectedly, or other weird SR issues, check that your recording settings have the microphone enabled:

• Double-click the Volume icon in your Task Bar's System Tray. If no Volume icon is present, choose Start | Programs | Accessories | Entertainment | Volume Control.
• If you see a Microphone column, ensure it has its Mute checkbox checked.
• Choose Options | Properties, click Recording, ensure the Microphone option is checked and press OK.
• Now ensure the Microphone column has its Select checkbox enabled, if it has one, or that its Mute checkbox is unchecked, if it has one.

SAPI 4 Deployment