




Prompt Engineering

Author: Lee Boonstra

Acknowledgements

Content contributors
Michael Sherman, Yuan Cao
Erick Armbrust, Anant Nawalgaria, Antonio Gulli, Simone Cammel

Curators and Editors
Antonio Gulli, Anant Nawalgaria, Grace Mollison

Technical Writer
Joey Haymaker

Designer
Michael Lanning
February 2025
Table of contents

Introduction
Prompt engineering
LLM output configuration
Output length
Sampling controls
Temperature
Top-K and top-P
Putting it all together
Prompting techniques
General prompting / zero shot
One-shot & few-shot
System, contextual and role prompting
System prompting
Role prompting
Contextual prompting
Step-back prompting
Chain of Thought (CoT)
Self-consistency
Tree of Thoughts (ToT)
ReAct (reason & act)
Automatic Prompt Engineering
Code prompting
Prompts for writing code
Prompts for explaining code
Prompts for translating code
Prompts for debugging and reviewing code
What about multimodal prompting?
Best Practices
Provide examples
Design with simplicity
Be specific about the output
Use Instructions over Constraints
Control the max token length
Use variables in prompts
Experiment with input formats and writing styles
For few-shot prompting with classification tasks, mix up the classes
Adapt to model updates
Experiment with output formats
JSON Repair
Working with Schemas
Experiment together with other prompt engineers
CoT Best practices
Document the various prompt attempts
Summary
Endnotes
Introduction
When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. You don't need to be a data scientist or a machine learning engineer – everyone can write a prompt. However, crafting the most effective prompt can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's training data, the model configurations, your word choice, style and tone, structure, and context all matter. Therefore, prompt engineering is an iterative process. Inadequate prompts can lead to ambiguous, inaccurate responses, and can hinder the model's ability to provide meaningful output.

When you chat with the Gemini chatbot,1 you basically write prompts; however, this whitepaper focuses on writing prompts for the Gemini model within Vertex AI or by using the API, because by prompting the model directly you will have access to configuration settings such as temperature.

This whitepaper discusses prompt engineering in detail. We will look into the various prompting techniques to help you get started, and share tips and best practices to become a prompting expert. We will also discuss some of the challenges you can face while crafting prompts.
Prompt engineering

Remember how an LLM works; it's a prediction engine. The model takes sequential text as an input and then predicts what the following token should be, based on the data it was trained on. The LLM is operationalized to do this over and over again, adding the previously predicted token to the end of the sequential text for predicting the following token. The next token prediction is based on the relationship between what's in the previous tokens and what the LLM has seen during its training.
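The loop described above can be sketched in a few lines of Python. The `next_token_distribution` function below is hypothetical (a stand-in for whatever inference API actually scores the vocabulary); the point is only to show how each predicted token is appended to the input before the next prediction.

```
# Illustrative sketch of the autoregressive loop; next_token_distribution is a
# hypothetical callable returning {token: probability} over the whole vocabulary.
def generate(prompt_tokens, next_token_distribution, max_new_tokens, eos_token=None):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)   # score every vocabulary token
        next_token = max(probs, key=probs.get)    # greedy decoding: keep the most likely token
        tokens.append(next_token)                 # feed the prediction back into the input
        if next_token == eos_token:
            break
    return tokens
```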
When you write a prompt, you are attempting to set up the LLM to predict the right sequence of tokens. Prompt engineering is the process of designing high-quality prompts that guide LLMs to produce accurate outputs. This process involves tinkering to find the best prompt, optimizing prompt length, and evaluating a prompt's writing style and structure in relation to the task. In the context of natural language processing and LLMs, a prompt is an input provided to the model to generate a response or prediction.

These prompts can be used to achieve various kinds of understanding and generation tasks such as text summarization, information extraction, question answering, text classification, language or code translation, code generation, and code documentation or reasoning.

Please feel free to refer to Google's prompting guides2,3 with simple and effective prompting examples.

When prompt engineering, you will start by choosing a model. Prompts might need to be optimized for your specific model, regardless of whether you use Gemini language models in Vertex AI, GPT, Claude, or an open source model like Gemma or LLaMA.

Besides the prompt, you will also need to tinker with the various configurations of an LLM.
LLM output configuration

Once you choose your model you will need to figure out the model configuration. Most LLMs come with various configuration options that control the LLM's output. Effective prompt engineering requires setting these configurations optimally for your task.
Output length

An important configuration setting is the number of tokens to generate in a response. Generating more tokens requires more computation from the LLM, leading to higher energy consumption, potentially slower response times, and higher costs.

Reducing the output length of the LLM doesn't cause the LLM to become more stylistically or textually succinct in the output it creates; it just causes the LLM to stop predicting more tokens once the limit is reached. If your needs require a short output length, you'll also possibly need to engineer your prompt to accommodate.

Output length restriction is especially important for some LLM prompting techniques, like ReAct, where the LLM will keep emitting useless tokens after the response you want.
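As a concrete illustration, the output token limit is usually set in the request configuration rather than in the prompt text. The snippet below is a minimal sketch assuming the Vertex AI Python SDK; the project ID is a placeholder and exact class or parameter names may differ between SDK versions.

```
# Minimal sketch, assuming the Vertex AI Python SDK and an existing GCP project.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

vertexai.init(project="your-project-id", location="us-central1")  # placeholder project
model = GenerativeModel("gemini-pro")

response = model.generate_content(
    "Summarize prompt engineering in one sentence.",
    # A hard stop on generated tokens; it does not make the answer more succinct by itself.
    generation_config=GenerationConfig(max_output_tokens=32),
)
print(response.text)
```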
Sampling controls

LLMs do not formally predict a single token. Rather, LLMs predict probabilities for what the next token could be, with each token in the LLM's vocabulary getting a probability. Those token probabilities are then sampled to determine what the next produced token will be.

Temperature, top-K, and top-P are the most common configuration settings that determine how predicted token probabilities are processed to choose a single output token.
Temperature

Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that expect a more deterministic response, while higher temperatures can lead to more diverse or unexpected results. A temperature of 0 (greedy decoding) is deterministic: the highest probability token is always selected (though note that if two tokens have the same highest predicted probability, depending on how tiebreaking is implemented you may not always get the same output with temperature 0).

Temperatures close to the max tend to create more random output. And as temperature gets higher and higher, all tokens become equally likely to be the next predicted token.
The Gemini temperature control can be understood in a similar way to the softmax temperature used in machine learning. A low temperature setting mirrors a low softmax temperature (T), emphasizing a single, preferred token with high certainty. A higher Gemini temperature setting is like a high softmax temperature, making a wider range of tokens around the most likely one acceptable. This increased uncertainty accommodates scenarios where a single, rigidly chosen token is not essential, for example when experimenting with creative outputs.
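To make the analogy concrete, the toy snippet below applies a softmax with different temperatures to the same three logits; it is illustrative only and not the actual Gemini sampling code.

```
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)  # guard against T = 0
    exp = np.exp(scaled - scaled.max())                                # subtract max for stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.1))   # low T: nearly all probability on the top token
print(softmax_with_temperature(logits, 1.0))   # moderate T: spread follows the logit gaps
print(softmax_with_temperature(logits, 10.0))  # high T: tokens become almost equally likely
```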
Top-K and top-P

Top-K and top-P (also known as nucleus sampling)4 are two sampling settings used in LLMs to restrict the predicted next token to come from tokens with the top predicted probabilities. Like temperature, these sampling settings control the randomness and diversity of generated text.

Top-K sampling selects the top K most likely tokens from the model's predicted distribution. The higher top-K, the more creative and varied the model's output; the lower top-K, the more restrictive and factual the model's output. A top-K of 1 is equivalent to greedy decoding.

Top-P sampling selects the top tokens whose cumulative probability does not exceed a certain value (P). Values for P range from 0 (greedy decoding) to 1 (all tokens in the LLM's vocabulary).

The best way to choose between top-K and top-P is to experiment with both methods (or both together) and see which one produces the results you are looking for.
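The sketch below shows one common way top-K and top-P filtering can be combined before sampling; real inference stacks may order or combine the steps differently.

```
import numpy as np

def filter_top_k_top_p(probs, top_k, top_p):
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]                  # token indices, most likely first
    keep = order[:top_k]                             # top-K: keep the K most likely tokens
    cumulative = np.cumsum(probs[keep])
    cutoff = np.searchsorted(cumulative, top_p) + 1  # top-P: smallest set whose mass reaches P
    keep = keep[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()                 # renormalize before sampling

probs = [0.5, 0.25, 0.15, 0.07, 0.03]
print(filter_top_k_top_p(probs, top_k=3, top_p=0.8))  # only the first three tokens survive
```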
Putting it all together

Choosing between top-K, top-P, temperature, and the number of tokens to generate depends on the specific application and desired outcome, and the settings all impact one another. It's also important to make sure you understand how your chosen model combines the different sampling settings together.

If temperature, top-K, and top-P are all available (as in Vertex Studio), tokens that meet both the top-K and top-P criteria are candidates for the next predicted token, and then temperature is applied to sample from the tokens that passed the top-K and top-P criteria. If only top-K or top-P is available, the behavior is the same but only the one top-K or P setting is used.

If temperature is not available, whatever tokens meet the top-K and/or top-P criteria are then randomly selected from to produce a single next predicted token.

At extreme settings of one sampling configuration value, that one sampling setting either cancels out other configuration settings or becomes irrelevant.

If you set temperature to 0, top-K and top-P become irrelevant – the most probable token becomes the next token predicted. If you set temperature extremely high (above 1 – generally into the 10s), temperature becomes irrelevant and whatever tokens make it through the top-K and/or top-P criteria are then randomly sampled to choose a next predicted token.

If you set top-K to 1, temperature and top-P become irrelevant. Only one token passes the top-K criteria, and that token is the next predicted token. If you set top-K extremely high, like to the size of the LLM's vocabulary, any token with a nonzero probability of being the next token will meet the top-K criteria and none are selected out.

If you set top-P to 0 (or a very small value), most LLM sampling implementations will then only consider the most probable token to meet the top-P criteria, making temperature and top-K irrelevant. If you set top-P to 1, any token with a nonzero probability of being the next token will meet the top-P criteria, and none are selected out.

As a general starting point, a temperature of .2, top-P of .95, and top-K of 30 will give you relatively coherent results that can be creative but not excessively so. If you want especially creative results, try starting with a temperature of .9, top-P of .99, and top-K of 40. And if you want less creative results, try starting with a temperature of .1, top-P of .9, and top-K of 20.

Finally, if your task always has a single correct answer (e.g., answering a math problem), start with a temperature of 0.
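These starting points translate directly into a generation config. The snippet below is a minimal sketch assuming the Vertex AI Python SDK; the preset names are ours, not part of any API.

```
from vertexai.generative_models import GenerationConfig

# Starting points from the text above; tune per task.
PRESETS = {
    "coherent":      GenerationConfig(temperature=0.2, top_p=0.95, top_k=30),
    "more_creative": GenerationConfig(temperature=0.9, top_p=0.99, top_k=40),
    "less_creative": GenerationConfig(temperature=0.1, top_p=0.9,  top_k=20),
    "single_answer": GenerationConfig(temperature=0.0),  # e.g. math problems with one correct answer
}
```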
NOTE: With more freedom (higher temperature, top-K, top-P, and output tokens), the LLM might generate text that is less relevant.

WARNING: Have you ever seen a response ending with a large amount of filler words? This is also known as the "repetition loop bug", which is a common issue in large language models where the model gets stuck in a cycle, repeatedly generating the same (filler) word, phrase, or sentence structure, often exacerbated by inappropriate temperature and top-K/top-P settings. This can occur at both low and high temperature settings, though for different reasons. At low temperatures, the model becomes overly deterministic, sticking rigidly to the highest probability path, which can lead to a loop if that path revisits previously generated text. Conversely, at high temperatures, the model's output becomes excessively random, increasing the probability that a randomly chosen word or phrase will, by chance, lead back to a prior state, creating a loop due to the vast number of available options. In both cases, the model's sampling process gets "stuck," resulting in monotonous and unhelpful output until the output window is filled. Solving this often requires careful tinkering with temperature and top-K/top-P values to find the optimal balance between determinism and randomness.
Prompting techniques

LLMs are tuned to follow instructions and are trained on large amounts of data so they can understand a prompt and generate an answer. But LLMs aren't perfect; the clearer your prompt text, the better it is for the LLM to predict the next likely text. Additionally, specific techniques that take advantage of how LLMs are trained and how LLMs work will help you get the relevant results from LLMs.

Now that we understand what prompt engineering is and what it takes, let's dive into some examples of the most important prompting techniques.
General prompting / zero shot

A zero-shot5 prompt is the simplest type of prompt. It only provides a description of a task and some text for the LLM to get started with. This input could be anything: a question, a start of a story, or instructions. The name zero-shot stands for 'no examples'.

Let's use Vertex AI Studio (for Language) in Vertex AI,6 which provides a playground to test prompts. In Table 1, you will see an example zero-shot prompt to classify movie reviews.

The table format as used below is a great way of documenting prompts. Your prompts will likely go through many iterations before they end up in a codebase, so it's important to keep track of your prompt engineering work in a disciplined, structured way. More on this table format, the importance of tracking prompt engineering work, and the prompt development process is in the Best Practices section later in this chapter ("Document the various prompt attempts").

The model temperature should be set to a low number, since no creativity is needed, and we use the gemini-pro default top-K and top-P values, which effectively disable both settings (see 'LLM Output Configuration' above). Pay attention to the generated output. The words disturbing and masterpiece should make the prediction a little more complicated, as both words are used in the same sentence.
Name: 1_1_movie_classification
Goal: Classify movie reviews as positive, neutral or negative.
Model: gemini-pro
Temperature: 0.1
Token Limit: 5
Top-K: N/A
Top-P: 1

Prompt:
Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. I wish there were more movies like this masterpiece.
Sentiment:

Output:
POSITIVE

Table 1. An example of zero-shot prompting
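For reference, the Table 1 prompt and settings map to a call like the one below; this is a minimal sketch assuming the Vertex AI Python SDK and a project already initialized with vertexai.init().

```
from vertexai.generative_models import GenerativeModel, GenerationConfig

model = GenerativeModel("gemini-pro")
prompt = (
    "Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.\n"
    'Review: "Her" is a disturbing study revealing the direction humanity is headed '
    "if AI is allowed to keep evolving, unchecked. I wish there were more movies like "
    "this masterpiece.\n"
    "Sentiment:"
)

response = model.generate_content(
    prompt,
    generation_config=GenerationConfig(temperature=0.1, top_p=1, max_output_tokens=5),
)
print(response.text)  # expected: POSITIVE
```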
When zero-shot doesn't work, you can provide demonstrations or examples in the prompt, which leads to "one-shot" and "few-shot" prompting.
One-shot & few-shot

When creating prompts for AI models, it is helpful to provide examples. These examples can help the model understand what you are asking for. Examples are especially useful when you want to steer the model to a certain output structure or pattern.

A one-shot prompt provides a single example, hence the name one-shot. The idea is the model has an example it can imitate to best complete the task.

A few-shot prompt7 provides multiple examples to the model. This approach shows the model a pattern that it needs to follow. The idea is similar to one-shot, but multiple examples of the desired pattern increase the chance the model follows the pattern.

The number of examples you need for few-shot prompting depends on a few factors, including the complexity of the task, the quality of the examples, and the capabilities of the generative AI (gen AI) model you are using. As a general rule of thumb, you should use at least three to five examples for few-shot prompting. However, you may need to use more examples for more complex tasks, or you may need to use fewer due to the input length limitation of your model.
Table 2 shows a few-shot prompt example; let's use the same gemini-pro model configuration settings as before, other than increasing the token limit to accommodate the need for a longer response.
Goal: Parse pizza orders to JSON
Model: gemini-pro
Temperature: 0.1
Token Limit: 250
Top-K: N/A
Top-P: 1

Prompt:
Parse a customer's pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
```
{
"size": "small",
"type": "normal",
"ingredients": [["cheese", "tomato sauce", "pepperoni"]]
}
```

EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella
JSON Response:
```
{
"size": "large",
"type": "normal",
"ingredients": [["tomato sauce", "basil", "mozzarella"]]
}
```

Now, I would like a large pizza, with the first half cheese and mozzarella. And the other tomato sauce, ham and pineapple.
JSON Response:

Output:
```
{
"size": "large",
"type": "half-half",
"ingredients": [["cheese", "mozzarella"], ["tomato sauce", "ham", "pineapple"]]
}
```

Table 2. An example of few-shot prompting
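When a few-shot prompt like Table 2 ends up in a codebase, it helps to keep the examples as data and assemble the prompt programmatically. The helper below is a small, hypothetical sketch of that idea; the names are ours, not part of any SDK.

```
# Hypothetical helper for assembling the Table 2 few-shot prompt from reusable examples.
EXAMPLES = [
    ("I want a small pizza with cheese, tomato sauce, and pepperoni.",
     '{"size": "small", "type": "normal", "ingredients": [["cheese", "tomato sauce", "pepperoni"]]}'),
    ("Can I get a large pizza with tomato sauce, basil and mozzarella",
     '{"size": "large", "type": "normal", "ingredients": [["tomato sauce", "basil", "mozzarella"]]}'),
]

def build_few_shot_prompt(order: str) -> str:
    parts = ["Parse a customer's pizza order into valid JSON:"]
    for request, json_response in EXAMPLES:
        parts.append(f"EXAMPLE:\n{request}\nJSON Response:\n{json_response}")
    parts.append(f"{order}\nJSON Response:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    "Now, I would like a large pizza, with the first half cheese and mozzarella. "
    "And the other tomato sauce, ham and pineapple."
)
print(prompt)
```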
When you choose examples for your prompt, use examples that are relevant to the task you want to perform. The examples should be diverse, of high quality, and well written. One small mistake can confuse the model and will result in undesired output.

If you are trying to generate output that is robust to a variety of inputs, then it is important to include edge cases in your examples. Edge cases are inputs that are unusual or unexpected, but that the model should still be able to handle.
System, contextual and role prompting

System, contextual and role prompting are all techniques used to guide how LLMs generate text, but they focus on different aspects:

System prompting sets the overall context and purpose for the language model. It defines the 'big picture' of what the model should be doing, like translating a language, classifying a review etc.

Contextual prompting provides specific details or background information relevant to the current conversation or task. It helps the model to understand the nuances of what's being asked and tailor the response accordingly.

Role prompting assigns a specific character or identity for the language model to adopt. This helps the model generate responses that are consistent with the assigned role and its associated knowledge and behavior.

There can be considerable overlap between system, contextual, and role prompting. E.g. a prompt that assigns a role to the system can also have a context.

However, each type of prompt serves a slightly different primary purpose:

System prompt: Defines the model's fundamental capabilities and overarching purpose.

Contextual prompt: Provides immediate, task-specific information to guide the response. It's highly specific to the current task or input, which is dynamic.

Role prompt: Frames the model's output style and voice. It adds a layer of specificity and personality.

Distinguishing between system, contextual, and role prompts provides a framework for designing prompts with clear intent, allowing for flexible combinations and making it easier to analyze how each prompt type influences the language model's output.

Let's dive into these three different kinds of prompts.
System prompting

Table 3 contains a system prompt, where I specify additional information on how to return the output. I increased the temperature to get a higher creativity level, and I specified a higher token limit. However, because of my clear instruction on how to return the output the model didn't return extra text.
Goal: Classify movie reviews as positive, neutral or negative.
Model: gemini-pro
Temperature: 1
Token Limit: 5
Top-K: 40
Top-P: 0.8

Prompt:
Classify movie reviews as positive, neutral or negative. Only return the label in uppercase.
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. It's so disturbing I couldn't watch it.
Sentiment:

Output:
NEGATIVE

Table 3. An example of system prompting
System prompts can be useful for generating output that meets specific requirements. The name 'system prompt' actually stands for 'providing an additional task to the system'. For example, you could use a system prompt to generate a code snippet that is compatible with a specific programming language, or you could use a system prompt to return a certain structure. Have a look at Table 4, where I return the output in JSON format.
Goal: Classify movie reviews as positive, neutral or negative, return JSON.
Model: gemini-pro
Temperature: 1
Token Limit: 1024
Top-K: 40
Top-P: 0.8

Prompt:
Classify movie reviews as positive, neutral or negative. Return valid JSON:
Review: "Her" is a disturbing study revealing the direction humanity is headed if AI is allowed to keep evolving, unchecked. It's so disturbing I couldn't watch it.
Schema:
```
MOVIE:
{
"sentiment": String "POSITIVE" | "NEGATIVE" | "NEUTRAL",
"name": String
}
MOVIE REVIEWS:
{
"movie_reviews": [MOVIE]
}
```
JSON Response:

Output:
```
{
"movie_reviews": [
{
"sentiment": "NEGATIVE",
"name": "Her"
}
]
}
```

Table 4. An example of system prompting with JSON format
There are some benefits in returning JSON objects from a prompt that extracts data. In a real-world application I don't need to manually create this JSON format, I can already return the data in a sorted order (very handy when working with datetime objects), but most importantly, prompting for a JSON format forces the model to create a structure and limits hallucinations.
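Because the schema in Table 4 tells the model exactly what shape to return, the response can be parsed with the standard library rather than ad-hoc string handling. A minimal sketch, with the model's JSON hard-coded here for illustration:

```
import json

# Raw model output matching the Table 4 schema (code fences already stripped).
raw_output = '{"movie_reviews": [{"sentiment": "NEGATIVE", "name": "Her"}]}'

try:
    reviews = json.loads(raw_output)
    for review in reviews["movie_reviews"]:
        print(review["name"], "->", review["sentiment"])
except (json.JSONDecodeError, KeyError) as err:
    print("Model output did not match the requested schema:", err)
```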
System prompts can also be really useful for safety and toxicity. To control the output, simply add an additional line to your prompt like: 'You should be respectful in your answer.'.
Role prompting

Role prompting is a technique in prompt engineering that involves assigning a specific role to the gen AI model. This can help the model to generate more relevant and informative output, as the model can craft its responses to the specific role that it has been assigned.

For example, you could role prompt a gen AI model to be a book editor, a kindergarten teacher, or a motivational speaker. Once the model has been assigned a role, you can then give it prompts that are specific to that role. For example, you could prompt a teacher to create a lesson plan that they can then review. Have a look at Table 5, where the model takes the role of a travel guide.
Goal: Act as travel guide and provide 3 travel suggestions
Model: gemini-pro
Temperature: 1
Token Limit: 1024
Top-K: 40
Top-P: 0.8

Prompt:
I want you to act as a travel guide. I will write to you about my location and you will suggest 3 places to visit near me. In some cases, I will also give you the type of places I will visit.
My suggestion: "I am in Amsterdam and I want to visit only museums."
Travel Suggestions:

Output:
Rijksmuseum: World-renowned museum, known for Dutch masterpieces like Rembrandt's "The Night Watch." Don't miss the incredible collection of paintings and artifacts showcasing Dutch art and history.
Van Gogh Museum: Immerse yourself in the captivating works of Vincent van Gogh. The museum houses the largest collection of his paintings and drawings, including "The Starry Night" and "Sunflowers."
Stedelijk Museum Amsterdam: Discover modern and contemporary art from around the world. The museum's collection includes works by Picasso, Kandinsky, and Rietveld, housed in a striking modern building.

Table 5. An example of role prompting
The example in Table 5 shows the model taking the role of a travel guide. If you change the role to a teacher of geography, you will notice that you receive a different response.

Defining a role perspective for an AI model gives it a blueprint of the tone, style, and focused expertise you're looking for to improve the quality, relevance, and effectiveness of your output.

Here are some styles you can choose from which I find effective:
Confrontational, Descriptive, Direct, Formal, Humorous, Influential, Informal, Inspirational, Persuasive

Let's change our prompt in Table 6 to include a humorous and inspirational style.
Prompt:
I want you to act as a travel guide. I will write to you about my location and you will suggest 3 places to visit near me in a humorous style.
My suggestion: "I am in Manhattan."
Travel Suggestions:

Output:
Behold the Empire State of Mind: Ascend to the dizzying heights of the Empire State Building and bask in the glory of Manhattan's skyline. Prepare to feel like King Kong atop the Big Apple, minus the giant ape-sized banana.
Get Artsy-Fartsy at MoMA: Unleash your inner art aficionado at the Museum of Modern Art (MoMA). Gaze upon masterpieces that will boggle your mind and make you question whether your stick-figure drawings have any artistic merit.
Shop 'Til You Drop on Fifth Avenue: Indulge in some retail therapy on the iconic Fifth Avenue. Brace yourself for sticker shock as you window-shop (or actually shop) at designer boutiques that will make your wallet cry. But hey, you're in Manhattan, so you might as well embrace the fabulousness!

Table 6. An example of role prompting with a humorous tone and style
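Role prompts like the ones in Tables 5 and 6 are easy to template, so the role and style can be swapped without rewriting the prompt text. The helper below is a hypothetical sketch, not an SDK feature.

```
def build_role_prompt(role: str, suggestion: str, style: str = "") -> str:
    # Compose the role, an optional style, and the user's request into one prompt.
    style_clause = f" in a {style} style" if style else ""
    return (
        f"I want you to act as a {role}. I will write to you about my location and "
        f"you will suggest 3 places to visit near me{style_clause}.\n"
        f'My suggestion: "{suggestion}"\n'
        "Travel Suggestions:"
    )

print(build_role_prompt("travel guide", "I am in Amsterdam and I want to visit only museums."))
print(build_role_prompt("travel guide", "I am in Manhattan.", style="humorous"))
```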
Contextual prompting

By providing contextual prompts, you can help ensure that your AI interactions are as seamless and efficient as possible. The model will be able to more quickly understand your request and be able to generate more accurate and relevant responses, as you can see in the example of Table 7.
Goal: Suggest articles for a blog about retro games
Model: gemini-pro
Temperature: 1
Token Limit: 1024
Top-K: 40
Top-P: 0.8

Prompt:
Context: You are writing for a blog about retro 80's arcade video games.
Suggest 3 topics to write an article about with a few lines of description of what this article should contain.

Output:
**The Evolution of Arcade Cabinet Design:**
This article would explore the evolution of arcade cabinet designs, from the early wood and metal cabinets of the 1970s to the sleek, neon-lit designs of the 1980s and beyond.
**Blast From The Past: Iconic Arcade Games of The 80's:**
This article would feature a glimpse into the history, impact, and legacy of some of the most iconic and influential arcade games of the 1980s, highlighting their innovations, popular mechanics, and enduring charm.
**The Rise and Retro Revival of Pixel Art:**
This article would delve into the evolution of pixel art as a defining visual style of early arcade games, tracing its roots, exploring its techniques, and discussing the resurgence of pixel art in modern games and digital art.

Table 7. An example of contextual prompting
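Since system, role, and contextual instructions are ultimately just parts of one prompt string, they can be composed explicitly. The sketch below combines the three kinds discussed in this section; the layout and labels are illustrative, not a required format.

```
def build_prompt(system: str, role: str, context: str, task: str) -> str:
    return (
        f"{system}\n"                        # system: overall purpose and output rules
        f"I want you to act as {role}.\n"    # role: identity and voice
        f"Context: {context}\n"              # context: task-specific background
        f"{task}"
    )

prompt = build_prompt(
    system="Only return your answer as a short bulleted list.",
    role="an experienced games journalist",
    context="You are writing for a blog about retro 80's arcade video games.",
    task="Suggest 3 topics to write an article about with a few lines of description "
         "of what this article should contain.",
)
print(prompt)
```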