Saturday, June 27, 2015

Shadow mapping: my shadow is correct but cut

This is my first time on SO, so I may miss a few things; tell me if that's the case. Also, I'm from Belgium, so my English may not be perfect.

I'm currently working on a shadow mapping implementation for my game engine. It seems to render and cast the shadow map well, but if the shadow is too close to the emitter, it gets cut off, as shown in this screenshot: Screenshot
(The white lines are there to check that the shadow is in the right location; they are projections from the spotlight's origin towards the cube's vertices.)

As you can see, the shadow is cut off; it should start from the cube's edge on the floor.

I'm using a 256*256 DEPTH16 shadow map rendered from the light's point of view with a perspective matrix: NzMatrix4f::Perspective(lightComponent.GetOuterAngle()*2.f, 1.f, 2.f, lightComponent.GetRadius())

Which ultimately gives us the following projection matrix:

1, 0, 0, 0,
0, 1, 0, 0,
0, 0, -1.02041, -1,
0, 0, -2.04082, 0

I found that reducing the zFar value improved the shadow a little: zNear = 0.1, zFar = 1000 zNear = 0.1, zFar = 500

I think the problem comes from the depth comparison, although I have no idea what I'm doing wrong.

Here's the shader code (when projecting the shadow): Vertex Shader:

vLightSpacePos = LightProjMatrix * LightViewMatrix * WorldMatrix * vec4(VertexPosition, 1.0);

Fragment Shader:

#if SHADOW_MAPPING
if (vLightSpacePos.w > 0.0)
{
    vec3 ProjectionCoords = vLightSpacePos.xyz / vLightSpacePos.w;

    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
    UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;

    float Depth = texture(ShadowMap, UVCoords).x;
    if (Depth < ProjectionCoords.z)
    {
        lightDiffuse *= 0.5;
        lightSpecular = vec3(0.0);
    }
}
#endif

Here's a video I made to show the bug, with a spotlight casting a shadow from a cube (neither is moving) while I move the floor down; the shadow seems to fix itself once the distance is great enough: https://youtu.be/sisbOOml_cg

Am I missing something?

Are libtorrent transfers by default encrypted?

Sorry if this sounds blunt, but I wish to ask this question as I am new to libtorrent. I am trying to write a file transfer mechanism using libtorrent, but I ran into this interesting error while doing a CMake build:

CMake Error at C:/Program Files (x86)/CMake/share/cmake-3.2/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
  Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
  system variable OPENSSL_ROOT_DIR (missing: OPENSSL_LIBRARIES
  OPENSSL_INCLUDE_DIR)
Call Stack (most recent call first):
  C:/Program Files (x86)/CMake/share/cmake-3.2/Modules/FindPackageHandleStandardArgs.cmake:374 (_FPHSA_FAILURE_MESSAGE)
  C:/Program Files (x86)/CMake/share/cmake-3.2/Modules/FindOpenSSL.cmake:334 (find_package_handle_standard_args)
  CMakeLists.txt:162 (FIND_PACKAGE)


The error is clear enough: I have a missing dependency and I could install it. But what I am wondering is whether torrent traffic using libtorrent is encrypted by default. I have already implemented an encryption scheme in my program, and I wouldn't want to encrypt already-encrypted data!

Executing external exe in a child window (cpp,win32)

I have written a simple Win32 program, say abc.exe. I added a button to it; clicking the button should start an external exe (say xyz.exe). But the original program, abc.exe, should be inaccessible while xyz.exe is running (just as with a message box: the parent window is inactive until the message box is closed). How can I do this? It would be great if you could post example code.

C++ Text-RPG Inventory system

I'm building a text-RPG inventory system, but I'm not really sure how to properly create equippable items. For example, I can equip an item that is in the player's inventory, but I cannot identify what kind of item it is (sword, shield, gloves or something else) because an item should be equipped in the proper place (helmet on head, sword in hands and so on). Is there a way to do this?

#include <iostream>
#include <vector>
#include <Windows.h>
#include <string>

using namespace std;

void Red()
{
    SetConsoleTextAttribute
    (GetStdHandle(STD_OUTPUT_HANDLE), FOREGROUND_RED | FOREGROUND_INTENSITY);
} //Intensive red console text color.

void Green()
{
    SetConsoleTextAttribute
        (GetStdHandle(STD_OUTPUT_HANDLE), FOREGROUND_GREEN | FOREGROUND_INTENSITY);
} //Intensive green console text color.

void Normal()
{
    SetConsoleTextAttribute
        (GetStdHandle(STD_OUTPUT_HANDLE), FOREGROUND_GREEN | FOREGROUND_RED | FOREGROUND_BLUE);
} //Default console text color.

struct Item{
    string name; //Item name.
    int price; //Item price.
    int purpose; // 0 - Head, 1 - Neck, 2 - Torso, 3 - Hands, 4 - Legs, 5 - Feets.
    int attribute; //Attack, Defense...
};

int main()
{
    //Isn't the smartest way to do so...
    Item Sword{
        "Sword", //This item name is 'Short Sword'.
        87, //Cost 87 gold pieces.
        3, //Use this item with hands. :D.
        10 //+10 attack.
    };

    string input; // input is for player commands.
    vector<string> Equipment = { "<Empty>", "<Empty>", "<Empty>", "<Empty>", "<Empty>","<Empty>" }; //Current equipment.
    vector<string> Inventory = {Sword.name}; //Player Inventory.
    string InventorySlots[] = { "Head", "Neck", "Torso", "Hands", "Legs", "Feets" }; //Player parts where items can be equiped.

    while (true){
        cin >> input;
        if (input == "equipment"){
            for (int i = 0; i < 6; i++){
                Normal();
                cout << InventorySlots[i];
                if (Equipment[i] == "<Empty>")
                    Red();
                cout << " " << Equipment[i] << endl << endl;
            }
            Normal();
        }

        if (input == "equip"){
            cout << "What do you want to equip? ";
            cin >> input;
            for (int i = 0; i < Inventory.size(); i++){
                //Search for item player want to equip and equip it in the right place.
                if (input == Inventory[i]){
                    //Inventory[i] = input;
                    //But how to identify what kind of item it is?
                    cout << "Successfully equiped!" << endl;
                }
            }
        }

        if(input == "inventory"){
            for (int i = 0; i < Inventory.size(); i++){
                cout << "______________________________________________________________" << endl;
                cout << "|  " << Inventory[i] << endl;
                cout << "|  Carried items " << Inventory.size() << " / " << 20 << endl;
                cout << "|_____________________________________________________________" << endl;
            }
        }

    }
    system("PAUSE"); // or 'cin.get()'
    return 0;   
}

Boost ASIO SSL get the number of available bytes for read

In this example, the buffer available for reading is 1024 bytes. However, I can't pass that number to boost::asio::buffer() in handle_write(); the program gives an error:

Write failed: uninitialized

The program only works if I put exactly the number of bytes sent from the server.

So I tried to use the available() function, which I found defined in the include file basic_socket.hpp, to tell me the number of bytes available to read, but I'm not sure how to call it. Please assist!

void handle_write(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        size_t len = socket_.available();
        boost::asio::async_read(socket_,
                                boost::asio::buffer(reply_, len),
                                boost::bind(&SSLClient::handle_read, this,
                                            boost::asio::placeholders::error,
                                            boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        std::cout << "Write failed: " << error.message() << "\n";
    }
}

The question is: in this example, how can I read all the available bytes?

AMD / ARM alternatives of PDEP Intel BMI2 instruction?

I'm about to implement an algorithm using the Intel BMI2 (Bit Manipulation Instruction Set 2) PDEP (parallel bits deposit) instruction. After a short search on the web, it's still unclear whether alternative instructions, or methods combining multiple special instructions, are available on AMD/ARM architectures.

Any suggestions?

How to store a collection of string keys and JSON values in C++

I am trying to store a collection of key-value pairs in C++, where the key is a string and the value is a JSON object.

Then I need to access this JSON object using Key1. For example:

Key1 = "name1"
Value1 = {name:"Anil Gautam","age":25}
Key2 = "name2"
Value2 = {name:"Sharan Gupta","age":26}

I want to access

{name:"Anil Gautam","age":25} 

when I input "name1". What can I do to store this kind of data in C++?

How to get an integer from a char array

I have a char array containing '4' and '5'. I want to convert those characters into actual integers, so I subtract '0' from each index of the char array and store the result back in the same array. If I want to set an int called value to 45 (the contents of the char array), how would I do it?

Does storing objects in a std::vector increase the lifetime of the object?

for (int i = 0; i < 10; i++)
{
    nueron temp; // my Class
    _inputNuerons.push_back(temp); // _inputNuerons is a std::vector
}

From what I know, C++ deallocates the temp object when it reaches the closing } of the for loop. But does storing the object in a std::vector make it usable later in the program? When I try to access the "temp" neuron outside the for loop, the program crashes.

How can I store my objects in a std::vector such that it is usable later on ?

Programming without semicolon in C or Java or C++ [on hold]

I was trying to solve easy arithmetic problems with expressions of the form:

<expre>::=<num><opt><num>
<num>::=0|1|2|...|99
<opt>::=+|-|*

But the constraint is I can not use semicolons in my program.

I was able to print strings without semicolons but had no luck taking input from the user or printing a calculated value.

#include <stdio.h>
main()
{
    if((printf("cspractical")))
    {
        if(int n=0 && scanf("%d",&n) && printf("%d",n))
        {}
    }
}

This is what I was trying. It prints cspractical but nothing else.

Any help would be appreciated.

C++ CreateFile function locks the file with UAC

I'm working on a DLL that extracts a resource file (.exe) to the TEMP folder and executes it. The problem is that it fails to execute the file unless the application using the DLL is elevated (has administrator privileges). When I open the TEMP folder in File Explorer and double-click the extracted file, I get a UAC prompt. When I click "change when these notifications appear", it is set to the default (notify when a program tries to make changes to your computer). I really don't understand why I get this notification, because it's just a random file that runs without privileges on my PC unless it's extracted by the DLL. I use Windows 8 and Visual Studio 2013. Here's part of the DLL's code:

    HANDLE hFile = CreateFile(exePath, GENERIC_WRITE, FILE_SHARE_READ, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) continue;
    WriteFile(hFile, pExeResource, SizeofResource(hinstDLL, hrsrc), NULL, NULL); 
    CloseHandle(hFile);
    if (!CreateProcess(exePath, NULL, NULL, NULL, FALSE, REALTIME_PRIORITY_CLASS, NULL, tmpPath, &info, &pi))
        cout << "FAIL !\n";
    else cout << "SUCCESS !\n";

My C++ random number is not random

I am running this

#include <boost/mpi.hpp>
#include <iostream>
#include <vector>
#include <cstdlib>
#include <time.h>
namespace mpi = boost::mpi;

int main()
{
    mpi::environment env;
    mpi::communicator world;



    srand (time(NULL));
    std::srand(time(0) + world.rank());
    int my_number = std::rand();
    if (world.rank() == 0) {
        std::vector<int> all_numbers;
        gather(world, my_number, all_numbers, 0);
        for (int proc = 0; proc < world.size(); ++proc)
            std::cout << "Process #" << proc << " thought of "
            << all_numbers[proc] << std::endl;
    } else {
        gather(world, my_number, 0);
    }

    return 0;
}

to generate random numbers in a distributed way; however, it gives me numbers of around the same magnitude every time...

dhcp-18-189-66-216:ising2 myname$ make
mpic++ -I/usr/local/include/boost -L/usr/local/lib -lboost_mpi -lboost_serialization main.cpp -o main
mpirun -n 4 main
Process #0 thought of 238772362
Process #1 thought of 238789169
Process #2 thought of 238805976
Process #3 thought of 238822783
dhcp-18-189-66-216:ising2 myname$ make
mpic++ -I/usr/local/include/boost -L/usr/local/lib -lboost_mpi -lboost_serialization main.cpp -o main
mpirun -n 4 main
Process #0 thought of 238805976
Process #1 thought of 238822783
Process #2 thought of 238839590
Process #3 thought of 238856397
dhcp-18-189-66-216:ising2 myname$ make
mpic++ -I/usr/local/include/boost -L/usr/local/lib -lboost_mpi -lboost_serialization main.cpp -o main
mpirun -n 4 main
Process #0 thought of 238856397
Process #1 thought of 238873204
Process #2 thought of 238890011
Process #3 thought of 238906818
dhcp-18-189-66-216:ising2 myname$ 

On this website, http://ift.tt/1LuAXhT, others said they get:

Process #0 thought of 332199874
Process #1 thought of 20145617
Process #2 thought of 1862420122
Process #3 thought of 480422940
Process #4 thought of 1253380219
Process #5 thought of 949458815
Process #6 thought of 650073868

I am very confused... Any help? Thank you.

cocos2dx - Alignment with LabelBMFont

I am trying, and struggling, to align a BMFontLabel side by side with a sprite. I am using the code below to show the score of the game, with a coin on the left and the score on the right.

Sprite *coin=Sprite::create("coinimg.png");
Label *scoreLbl=Label::createWithBMFont(GameConfig::scoreBigFont, strScore);
scoreLbl->setScale(.5);

MenuItemSprite *itemCoin=MenuItemSprite::create(coin);
MenuItemLabel *itemScore=MenuItemLabel::create(scoreLbl);

Menu *mainMenu=Menu::create(itemCoin,itemScore, NULL);

mainMenu->alignItemsHorizontallyWithPadding(0.0);

mainMenu->setPosition(Vec2(winsize.width*.1,winsize.height*.8));

The problem is that I get different alignment with the different values I pass, as shown in the attached image.


"invalid pure specifier" when I meant no pure specifier?

Consider the following snippet:

class UltraProbe {
public:
  ConnectProbe *CP() {
    return probes.CP;  // if type == UP_CONNECT
  }
private:
  probespec mypspec; /* Filled in by the appropriate set* function */
  union {
    IPExtraProbeData IP;
    ConnectProbe *CP;
    //    ArpProbe *AP;
  } probes;

};

bool do_one_select_round(UltraScanInfo *USI, struct timeval *stime) {

  UltraProbe *probe = NULL;
  int errno = (probe->CP()->connect_result);

}

Why am I getting the error below?

scan_engine_connect.cc:592:22: error: invalid pure specifier (only ‘= 0’ is allowed) before ‘probe’
         int errno = (probe->CP()->connect_result);
                      ^

How can I use CMake to both build wxwidgets on-demand and link with it

I have the following situation:

  • I'm working on an application that depends on a number of third party libs, among them wxwidgets
  • I build the application for multiple target configurations (x86, arm, Linux, Windows) using Linux as my build host system
  • Due to the above mentioned multiple target configurations, I have chosen to build those third-party libs from source, using CMake's ExternalProject_Add function.
  • The third-party libs are built 'on-demand' at a location separate from my application's CMAKE_BINARY_DIR so that I can wipe the build tree for my application without having to rebuild the third-party libs (takes a looooong time).
  • The location of the third-party libs is different depending on what target configuration I build them for (obviously).

I'm quite new to CMake, and the problem I currently face is this: the source files in my application can't find the wx include files, and I need to set the correct linker flags to be able to link my application against wxWidgets.

This seems to be handled by a utility, wx-config, that prints exactly that info when run with either the --cppflags or --libs flag. I cannot, however, figure out how to capture that output and append it to the include dirs and linked libraries I set up in my CMakeLists.txt files.

So basically what I want is:

  1. Build wxwidgets (if it doesn't exist) for the current target configuration
  2. Run wx-config --cppflags and --libs to find out the correct include dirs and linker flags for the current target configuration
  3. Use the info from step 2 when building targets that are my own application

So far I've tried something like this:

# Set a target-configuration-specific location 
set(wxwidgetsTop ${MYPROJECT_EXTERNAL_DIR}/wxwidgets/wxwidgets_${MYPROJECT_CURRENT_TARGET_CFG})

# Build the project
ExternalProject_Add( wxWidgetsExternal
  PREFIX ${wxwidgetsTop}
  URL ${MYPROJECT_EXTERNAL_DIR}/tarballs/wxWidgets-3.0.2.tar.bz2
  SOURCE_DIR ${wxwidgetsTop}/src/wxwidgets
  CONFIGURE_COMMAND ${configure_cmdline}
  BUILD_COMMAND make -j${MYPROJECT_NCPU}
  INSTALL_COMMAND make install
  )

# Create a wxwidgets target to be used as a dependency from other code
add_library(wxWidgets IMPORTED STATIC GLOBAL)
add_dependencies(wxWidgets wxWidgetsExternal)

# (non-working) attempt to get the correct include dirs and linker
# flags for wxwidgets 
add_custom_command(TARGET wxWidgetsExternal
  POST_BUILD
  COMMAND ${INSTALL_DIR}/bin/wx-config ARGS --cppflags
  COMMENT "Running wx-config"
)

but the above does not provide a way to actually use the result of the custom command to append the cppflags and linker options when building the targets that make up my application.

What is a good way to achieve what I want?

Accelerometer controlled mouse

I want to make an accelerometer-controlled mouse: the pointer should move with the same acceleration as my accelerometer and in the same direction. I have an Arduino UNO to get accelerometer readings (X, Y, Z). I wrote C++ code for serial communication with the Arduino, which is working fine. I even calibrated the accelerometer to get a correct reading for 1 g. But the problem is that I can't match the real-life acceleration with the mouse acceleration.

The C++ code for the mouse movement is:

#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <stdio.h>
#include <iostream>
#include <conio.h>
#include <stdlib.h>
#include "tserial.h"
#include "bot_control.h"
#include <math.h>
using std::cout;


serial comm;
POINT p; //pointer for position (global)
void show()
{
    HWND hwnd = GetConsoleWindow();

        SetWindowPos(
        hwnd,
    HWND_TOPMOST,
    25,
    25,
    110,
    200,
    0
    );
}
void checkpos()
{
    GetCursorPos(&p); //get current mouse position
}
int map(int x, int in_min, int in_max, int out_min, int out_max)
{
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
int constrain(int a, int lb,int ub)
   { if((a>=lb)&&(a<=ub))
       return a;
      else if(a>ub)
       return ub;
      else if(a<lb)
       return lb;
   } 

int main(){
  int x,y,z;
   //show();
    printf("MOUSE");
     comm.startDevice("COM3", 9600);
   while(1)
/* “COM2” refers to the com port in which the USB to SERIAL port is         attached. It is shown by right clicking on my computer, then     going to properties and then device manager. 9600 is the baud-rate */
   { float a=0,b=0;
     int c=0,d=0;
      x=comm.get_char();
      y=comm.get_char();
      z=comm.get_char();
     // if(z<1)
      //z=z-37;
   //    if(y<1)
     // y=y-87;
   //z=map(z,-102,102,172,501);
  // y=map(y,-125,125,172,501);
       printf("X= %d     Y= %d     Z= %d \n",x,y,z);
        checkpos();
        a=(y*5*980*0.01)/(1024*2.45);
        c=p.x+a;
        b=(z*5*980*0.01)/(1024*2.45);
        d=p.y+b;
      SetCursorPos(c,d);
          cout<<"  "<<a<<"  "<<b<<'\n';
      cout<<"  "<<c<<"  "<<d<<'\n';

}
   //comm.send_data(data); //The data is sent through the port
   comm.stopDevice(); //The device is closed down

}

where tserial.h and bot_control.h is from the link below:- http://ift.tt/1fVuT4Q

Please help, I am stuck.

Should the file opening interface in a C++ library use UTF-8 on Windows?

I'm working on a library (pugixml) that, among other things, provides file load/save API for XML documents using narrow-character C strings:

bool load_file(const char* path);
bool save_file(const char* path);

Currently the path is passed verbatim to fopen, which means that on Linux/OSX you can pass a UTF-8 string to open the file (or any other byte sequence that is a valid path), but on Windows you have to use Windows ANSI encoding - UTF-8 won't work.

The document data is (by default) represented using UTF-8, so if you had an XML document with a file path, you would not be able to pass the path retrieved from the document to load_file function as is - or rather, this would not work on Windows. The library provides alternative functions that use wchar_t:

bool load_file(const wchar_t* path);

But using them requires extra effort for encoding UTF8 to wchar_t.

A different approach (used by SQLite and GDAL - I'm not sure if other C/C++ libraries do the same) involves treating the path as UTF-8 on Windows, converting it to UTF-16 and using a wchar_t-aware function to open the file.

There are different pros and cons that I can see and I'm not sure which tradeoff is best.

On one hand, using a consistent encoding on all platforms is definitely good. This would mean that you can use file paths extracted from the XML document to open other XML documents. Also if the application that uses the library adopts UTF-8 it does not have to do extra conversions when opening XML files through the library.

On the other hand, this means that behavior of file loading is no longer the same as that of standard functions - so file access through the library is not equivalent to file access through standard fopen/std::fstream. It seems that while some libraries take the UTF-8 path, this is largely an unpopular choice (is this true?), so given an application that uses many third-party libraries, it may increase confusion instead of helping developers.

For example, passing argv[1] into load_file currently works for paths encoded using system locale encoding on Windows (e.g. if you have a Russian locale you can load any files with Russian names like that, but you won't be able to load files with Japanese characters). Switching to UTF-8 will mean that only ASCII paths work unless you retrieve the command-line arguments in some other Windows-specific way.

And of course this would be a breaking change for some users of the library.

Am I missing any important points here? Are there other libraries that take the same approach? What is better for C++ - being consistently inconsistent in file access, or striving for uniform cross-platform behavior?

Note that the question is about the default way to open the files - of course nothing prevents me from adding another pair of functions with _utf8 suffix or indicating the path encoding in some other way.

How to adapt map in order to find the nearest smaller object?

For example, if my map is from integers to doubles and contains:

  • 2 -> 3.4
  • 3 -> 87.3
  • 5 -> 0.0
  • 12 -> 43.4

When I search 4, I have to obtain 87.3, since 3 is the nearest smaller key which is in the map. When I search 11, I have to obtain 0.0.

This could be done with a balanced tree, but I don't want to implement one; maybe there is a way to do this directly using the STL?

How to do a hardware level check for Anisotropic filtering support?

I just finished implementing anisotropic filtering support in my game, but I'm not quite sure how to do a hardware-level check to see if the user's hardware supports it. Is there a way to use CheckDeviceFormat for this, or does the device automatically fall back to trilinear if the hardware doesn't support anisotropic filtering? Thanks in advance :)

Monitors position on Windows wallpaper

I am working on a C++ application to manage wallpapers across multiple monitors on Windows, like what UltraMon and DisplayFusion do, but with more options.

Everything is working well except monitor placement.

My desktop has three monitors arranged as follows (ignore No. 4):

But when I look at the wallpaper file generated by UltraMon, the third monitor (the small one) does not have the same position:

Using the EnumDisplayMonitors and GetMonitorInfo functions, I get this data:

0: 1920x1080 at 1920x0
1: 1920x1080 at 0x0 primary
2: 800x480 at -800x-480 

Which corresponds to my arrangement, but differs from the wallpaper file.

How can I know where the image must be placed in the global wallpaper file so that it is displayed on the right monitor? I guess the author of UltraMon found the right way, because it works perfectly.

(I asked this question on UltraMon forum too).

DLL backward compatibility

The following line :

p.ti = new torrent_info(argv[1], ec);

produces the error below:

error C4996: 'libtorrent::torrent_info::torrent_info': was declared deprecated

How may I resolve this?

GLSL using same layout locations with multiple programs

I can render with multiple programs with the following code:

snippet from near end of main loop in main.cpp

. . .

glUseProgram( programID1 );

glEnableVertexAttribArray( 0 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[0] );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 20, (void*)0 );
    //only needed XY coordinates for the simple 2D I'm making
glEnableVertexAttribArray( 1 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[0] );
glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, 20, (void*)8 );
    //the third 'UV' coordinate chooses a tile from my 2d texture array
glDrawArrays( GL_TRIANGLES, 0, sizeof( vertuvbuffer0_data ) / 5 );
    //every vertex takes 5 floats, so to get total number I divide by 5
glDisableVertexAttribArray( 0 );
glDisableVertexAttribArray( 1 );

glUseProgram( programID2 );
glEnableVertexAttribArray( 2 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[1] );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 20, (void*)0 );
glEnableVertexAttribArray( 3 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[1] );
glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, 20, (void*)8 );
glDrawArrays( GL_TRIANGLES, 0, sizeof( vertuvbuffer1_data ) / 5 );
...
glDisableVertexAttribArray( 2 );
glDisableVertexAttribArray( 3 );

. . .

This is obviously with separate layout locations inside the different shaders; what I'm having trouble with is doing this:

. . .

glUseProgram( programID1 );

glEnableVertexAttribArray( 0 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[0] );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 20, (void*)0 );
glEnableVertexAttribArray( 1 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[0] );
glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, 20, (void*)8 );
glDrawArrays( GL_TRIANGLES, 0, sizeof( vertuvbuffer0_data ) / 5 );
glDisableVertexAttribArray( 0 );
glDisableVertexAttribArray( 1 );

glUseProgram( programID2 );
glEnableVertexAttribArray( 0 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[1] );
glVertexAttribPointer( 0, 2, GL_FLOAT, GL_FALSE, 20, (void*)0 );
glEnableVertexAttribArray( 1 );
glBindBuffer( GL_ARRAY_BUFFER, vertuvbuffer[1] );
glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, 20, (void*)8 );
glDrawArrays( GL_TRIANGLES, 0, sizeof( vertuvbuffer1_data ) / 5 );
...
glDisableVertexAttribArray( 0 );
glDisableVertexAttribArray( 1 );

. . .

The vertex shaders have their respective layout locations.

I'm not sure how to achieve something similar; if there's no way to do this, I would have to write a different vertex shader for every object I want to draw, and I can't imagine that would be the case.

I hope this makes sense. What can I possibly do at this point?

Why can't I set android-platform lower than 22 when I compile?

When I compile the standard hello world project created by cocos-console (cocos2d-x 3.6) with --ap android-16 (Android platform 16), it prints:

The android-platform of project "E:\projects\cocos_projects\test\proj.android" should be equal/larger than 22, but 16 is specified.

In AndroidManifest.xml I have: <uses-sdk android:minSdkVersion="9"/>.

In the SDK manager I have all platforms installed, and in ...\sdk\platforms I have 8, 10, 15-22 and MNC folders.

C++ linked list values changing retroactively

I am trying to implement a linked list class that uses a node class as defined in the assignment. The code block below prints the expected output:

#include <iostream>
using namespace std;

// Node class as provided
class node {
    void *info;
    node *next;
public:
    node (void *v) {info = v; next = 0; }
    void put_next (node *n) {next = n;}
    node *get_next ( ) {return next;}
    void *get_info ( ) {return info;}
};

// Linked list class
class list {
    //Start of the linked list
    node *start;
public:
    list (int v) {
        start = new node (&v);
    }

    void insert (int value, int place=-1) {
        node *temp = new node (&value);

        if (place == 0) {
            temp->put_next(start);
            start = temp;
        } else {
            node *before = start;
            for (int i = 1; before->get_next() != 0; i++) {
                if (i == place) { 
                    break;
                }
                before = before->get_next();
            }

            temp->put_next(before->get_next());
            before->put_next(temp);
        }
    }

    void remove(int place) {
        if (place == 0) {
            start = start->get_next();
        } else {
            node *curr = start;
            for (int i = 1; curr != 0; i ++) {
                if (i == place) {
                    curr->put_next(curr->get_next()->get_next());
                    break;
                }
                curr = curr->get_next();
            }
        }
    }

    void display() {
        for (node *current = start; current != 0; current = current->get_next()) {
            cout << *(static_cast<int*>(current->get_info())) << endl;
        }
    }
};

int main() {

    list *tst = new list(10);
    tst->display();
    cout << "Prepending 9" << endl;
    tst->insert(9,0);
    tst->display();
    cout << "Inserting 8" << endl;
    tst->insert(8,1);
    tst->display();
    cout << "Prepending 7" << endl;
    tst->insert(7,0);
    tst->display();

    tst->remove(0);

    cout << "Removed the first element:" << endl;
    tst->display();
    cout << endl;

//  cout << "Prepending 6" << endl;
//  tst->insert(6,0);
//  tst->display();

}

Creates this output:

10
Prepending 9
9
10
Inserting 8
9
8
10
Prepending 7
7
9
8
10
Removed the first element:
9
8
10

However, when I add this last statement to the end of main:

tst->insert(6,0);

My output changes to this:

10
Prepending 9
9
10
Inserting 8
8
8
10
Prepending 7
7
7
7
10
Removed the first element:
134515798
134515798
10

What am I missing? How can adding a value later in execution change output that is produced before the program even reaches that point?

I am using ideone.com as my IDE to run the program. I've never had an issue with it before, but could that be the problem?

C++ Edit text file in place

I have a text file to which I am adding tags in order to make it readable as XML. For our reader to recognize it as valid, each line must at least be wrapped in tags. My issue arises because this is actually a Syriac translation dictionary, so there are many non-standard characters (the actual Syriac words). The most straightforward approach I see is to simply prepend and append the needed tags to each line, in place, without necessarily accessing or modifying the rest of the line. Any other options would also be greatly appreciated.

ifstream in_file;
string file_name;

string line;
string line2;
string pre_text;
string post_text;

int num = 1;

pre_text = "<entry n=\"";
post_text = "</entry>";

file_name = "D:/TEI/dictionary1.txt";
in_file.open(file_name.c_str());

if (in_file.is_open()){
    while (getline(in_file, line)){
        line2 = pre_text + to_string(num) + "\">" + line + post_text;
        cout << line2;
        num++;
    }
}

The file in question may be downloaded here.

Undefined reference to method of partially specialized class

I've been working on a set of template classes that represent various bits of geometry, and I realized that I would like to be able to specialize various classes to handle references and pointers e.g.

template<typename T>
class rect{ // as in rectangle
public:
  point<T> point1, point2; // Point simply contains two instances of type T
  ...    // Twenty functions or so follow
};

template<typename T>
class rect<point<T>>{ // trying to put a point in a point makes no sense anyway
public:               // so I see this technique as valid
  point<T>& point1,& point2;
  ...    // I really don't want to redefine them if I don't have to
};

The problems start with my test program

#include <iostream>
#include <TGeometry.hpp>

template<typename T>
class test{
public:
  T A;

  void myfunc(){std::cout << "from base\n";}
  void otherfunc(T O);
  test(const T nA) : A(nA) {}
  test(){}
};

template<typename T>
void test<T>::otherfunc(T O){A += O;}

template<typename T>
class test<T&>{
public:
  T& A;

  void myfunc(){std::cout << "from refr\n";}
  void otherfunc(T O); // Shouldn't this default to using test<T>::otherfunc?
  test(T& nA) : A(nA) {}
};


int main(){ using std::cout;
  using namespace d2;

  test<int> Atest(5);
  test<int&> Btest(Atest.A);

  Atest.myfunc(), Btest.myfunc();
  Btest.otherfunc(Atest.A); // test<T&>::otherfunc undefined?
  Atest.otherfunc(10);

  std::cin.ignore();
  return 0;
}

This program, compiled with Mingw-w64, exits with:

C:\Users\*>g++ -o test.exe quicktest.cpp -I .\Libraries
C:\Users\THEMAG~1\AppData\Local\Temp\ccszH6xM.o:quicktest.cpp:(.text+0x42):undefined reference to `A<char, 2>::f()'
collect2.exe: error: ld returned 1 exit status

This may just be a lack of knowledge on my part, but this website (at the bottom, first example under members of partial specializations) implies that you can exclude the function definition for a partial specialization, as long as you have declared it, and it will default to using the primary template's definition.

Being able to do this would save me days of work, as I wouldn't have to redefine all of my class's functions. So my question is: what is keeping my code from compiling, and how can I specialize my classes for references (I'll still need to do it for pointers) without having to redefine their functions? Or is this simply an issue of references changing how the code would have to work, e.g. the primary template's function would just shift the reference if its function were used?

Transform C++/C object symbols

objcopy --prefix-symbols allows me to prefix all symbols exported by an object file / static library.

Can I perform a more sophisticated transformation?

In particular, I would like to be able to add a C++ namespace to an object file / static library, i.e., demangle the symbols, prefix the result, and remangle it.

How to configure shared library search path after building GCC on my own?

I just built GCC 5.1 on Ubuntu 14.04, which has gcc 4.8 as default. When I try to build things with it, I find that ld will use the default libstdc++ instead of the newly built one. Here is the output:

drizzlex@dx ~/test
$ g++ hello.cpp 

drizzlex@dx ~/test
$ ldd a.out 
    linux-vdso.so.1 =>  (0x00007ffde0d25000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6     (0x00007fa181ad2000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fa1817cc000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fa1815b5000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa1811f0000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fa181dfd000)

And if I use $ export LD_LIBRARY_PATH=/usr/local/lib64/, it will find the right one.

drizzlex@dx ~/test
$ ldd a.out 
    linux-vdso.so.1 =>  (0x00007fffeeaf5000)
    libstdc++.so.6 => /usr/local/lib64/libstdc++.so.6 (0x00007f4583d92000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4583a67000)
    libgcc_s.so.1 => /usr/local/lib64/libgcc_s.so.1 (0x00007f4583850000)    
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f458348b000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f458410e000)

What should I do to make this correct? I know that setting LD_LIBRARY_PATH is not the best choice.

CMake Remove /arch:sse2 for Visual Studio 2013

The project I'm working on right now uses CMake to generate an x64 Visual Studio 2010 project. Currently I'm in the process of converting it to x64 Visual Studio 2013.

Most everything has just fallen into place correctly, however I'm getting the following warning:

cl : Command line warning D9002 : ignoring unknown option '/arch:SSE2'

Tracking this issue down, it looks like Visual Studio 2013 no longer supports /arch:SSE2 for x64 projects, as using this extension is the default.

I've searched through our various CMakeLists.txt files (we've got one in each project, and one for the solution as a whole), and we're not setting the flag, so I'd assume that this is set by default in CMake.

In Visual Studio I can go to Project > Properties > C/C++ > Code Generation > Enable Enhanced Instruction Set and set it to 'Not Set', and everything works without any warnings.

How can I do this from within the CMakeLists.txt?

I've seen several ways of setting variables, usually something along the lines of set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /arch:SSE2"), but this can't be used to remove a flag (at least not that I'm aware of).
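For what it's worth, while set() can only append, string(REPLACE) can strip a flag that CMake's generator defaults (or an included script) already put into the variable. A sketch, assuming it runs after the project() call so the default flags exist:

```cmake
# Strip the now-unsupported option from the inherited flags:
string(REPLACE "/arch:SSE2" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
string(REPLACE "/arch:SSE2" "" CMAKE_C_FLAGS "${CMAKE_C_FLAGS}")
```

If the flag lives in a per-configuration variable, the same replace would need to be repeated for CMAKE_CXX_FLAGS_RELEASE and friends.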

Type conversion in open cv

I am trying to convert a uint32_t* into a format that can be accepted by a Mat object (from OpenCV). I do not need to display the data from the Mat; I am simply using the Mat to transport and pack information. I know that OpenCV does not use uint32_t, and as a workaround I used int. What I did is the following:

Mat package = Mat(width,height, CV_32SC4);
int *dst1 = package.data;
//code that works with the dst
//finally put it back in the package
package.data = dst1;

The above code gives me an error at the second line. (Cannot convert from 'int *' to 'uchar *')

what exactly is fiber safe optimizations in VC++?

I was reading about Fiber Safe optimizations on MSDN. It says that

Data declared with __declspec(thread) is referenced through a thread-local storage (TLS) array. The TLS array is an array of addresses that the system maintains for each thread. Each address in this array gives the location of thread-local storage data. A fiber is a lightweight object that consists of a stack and a register context and can be scheduled on various threads. A fiber can run on any thread. Because a fiber may get swapped out and restarted later on a different thread, the address of the TLS array must not be cached or optimized as a common sub expression across a function call

What are these fiber-safe optimizations? What is the actual purpose of using them? Why do they say that "Because a fiber may get swapped out and restarted later on a different thread, the address of the TLS array must not be cached or optimized as a common sub expression across a function call"? Why and when should it be prevented?

Print std::multimap keys and values alphabetically

I need to print out std::multimap alphabetically, both the authors names and their works.

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::multimap<std::string, std::string> authors = {{"Captain", "Nothing"}, {"ChajusSaib", "Foo"}, 
                                                        {"ChajusSaib", "Blah"}, {"Captain", "Everything"}, {"ChajusSaib", "Cat"}};

    for (const auto &b : authors)
    {
        std::cout << "Author:\t" << b.first << "\nBook:\t\t" << b.second << std::endl;
    }

    return 0;   
}

This prints out the authors' names alphabetically, but not their works. Any idea how I could print their works alphabetically as well? Thanks!

Incorporating Crypto++ Library

I have a question about the Crypto++ library. I needed to implement AES encryption for a task at work and decided to use this library. It is quite heavy once you build it: the lib turns out to be approximately 46 MB. I was wondering if there is an easier way to implement AES encryption without "reinventing the wheel". Or, if you do use this library at work, how do you keep it in git or SVN?

Thanks for the answers.

Error in extracting digits from integer

I have a simple program which gives the wrong output; the expected output is the digits of the number.

#include <iostream>
#include <cstdio>
#include <cmath>
#include <cstdlib>


using namespace std;

int main()
{
   int n = 125521;
   int d = floor(log10(n));
   printf("%d Digits\n",d+1);
   int t =0;
   while(floor(log10(n))-t)
   { printf("%d-----%d\n",(n/(int)pow(10,floor(log10(n))-t)%10),t); t++;}
   return 0;
}

This gives the output

6 Digits
1-----0
2-----1
5-----2
7-----3
2-----4

Strange output. Why does a 7 appear?

I know how to get the digits by other ways but I want this solution to work.

Now, as suggested in the answers, I got rid of the bug in the while loop (>= 0) and I get this output:

(screenshot of the corrected output)

Boost ASIO doesn't receive all available data

In the following I'm creating a client program starting from the example provided by boost asio.

So I tried to add another cycle of read and write. The first time it reads 6 bytes, which is the right length, but for some reason it reads only 4 bytes the second time. Here's my code, based on the example and simply repeated:

void handle_write(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        boost::asio::async_read(socket_,
                                boost::asio::buffer(reply_, bytes_transferred),
                                boost::bind(&SSLClient::handle_read, this,
                                            boost::asio::placeholders::error,
                                            boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        std::cout << "Write failed: " << error.message() << "\n";
    }
}

void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        std::cout << "Reply: ";
        std::cout.write(reply_, bytes_transferred);
        std::cout << "\n";

//So here I added another function to write again, which is 4 letters long
        boost::asio::async_write(socket_,
                                 boost::asio::buffer(std::string("Now?")),
                                 boost::bind(&SSLClient::handle_write_again, this,
                                             boost::asio::placeholders::error,
                                             boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        std::cout << "Read failed: " << error.message() << "\n";
    }
}

void handle_write_again(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        boost::asio::async_read(socket_,
                                boost::asio::buffer(reply_, bytes_transferred),
                                boost::bind(&SSLClient::handle_read_again, this,
                                            boost::asio::placeholders::error,
                                            boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        std::cout << "Write failed: " << error.message() << "\n";
    }
}

void handle_read_again(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        //here, only 4 letters are being read again... Why???
        std::cout << "Reply: ";
        std::cout.write(reply_, bytes_transferred);
        std::cout << "\n";
    }
    else
    {
        std::cout << "Read failed: " << error.message() << "\n";
    }
}

Now the problem is that the server sends the same message twice. The message is 6 letters. The first time, the message is received successfully with no problems. The second time, only 4 letters are received.

Why would this problem happen when I'm just reusing the same callback functions?

How to check where a function is referenced from

In a bare metal C/C++ project, I use gcc-arm-embedded (currently the most recent 4.9-2015-q2).

For various reasons, I have to avoid using certain functions, like some of stdio et cetera (I don't want to use retargeting or semihosting).

Further, I use FreeRtos with heap_4.c and had e.g. malloc() redirected directly to pvPortMalloc() like this:

void* malloc(size_t s) {
    return pvPortMalloc(s);
}

Therefore, I don't want to have any parts of the toolchain's heap management code within my binary.

Now there are some situations where a developer on my team wants to use e.g. printf(), which indirectly references _malloc_r() (and some more), and it's actually quite hard to find out where it's referenced from and therefore what to fix.

(The use of printf() is just an example here. In my project, I have a custom implementation of printf() which prints directly to the UART without using stdio. But there are other cases, e.g. type info demangling, …)

Currently my project (which consists of about 200 C and C++ source files) compiles fine without referencing _malloc_r() in any way, as long as I build with gcc 4.8.

But when building with gcc 4.9, I see unwanted references to _malloc_r and some more.

Is there a command-line tool to analyze my ELF file and find out where specific functions are referenced from?

Preprocessor: How to use a list to define variables and function inputs

I have three lists: one with integer variables (ilist), one with double variables (dlist), and one with std::string variables (slist), each with specific initial values. Example: ilist=(ilist1=init_val_1)(ilist2=init_val_2)

Is it possible for the preprocessor to generate code like:

int ilist1=init_val_1;
int ilist2=init_val_2;
...
int ilistn=init_val_n;

double dlist1=dnit_val_1;
double dlist2=dnit_val_2;

f(ilist1, ilist2, ilist3, ..., ilistn, dlist1, dlist2);

I can use boost (boost::preprocessor) in this project. Thanks in advance for all answers.

Why does a static data member of a class update incorrectly when passed to a function?

After execution, Goomba::liveGoombas is equal to some negative value. I debugged it but did not understand why the destructor runs more times than the constructor. Why does this not work correctly?

// Here is a simple Goomba class. It just keeps track of how many Goombas are alive.

class Goomba
{
public:
  static int liveGoombas;

  Goomba() { liveGoombas++; }
  ~Goomba() { liveGoombas--; }
};

int Goomba::liveGoombas = 0;

// And a Goomba legion class. Please don't change this class.
class GoombaLegion
{
public:
  void add(Goomba goomba)
  {
    goombas.push_back(goomba); //it seems that something wrong in this function
  }

private:
  std::vector<Goomba> goombas;
};

void goombas()
{
  {
    GoombaLegion legion;
  }

  // The legion went out of scope and was destroyed. But how many Goombas are alive?
  std::cout << "There are " << Goomba::liveGoombas << " live goombas" << std::endl;
}



int main()
{
  goombas();

}

"libgcc_s_sjlj-1.dll" is missing

I am trying to run my first SFML application. I can run it via Code Blocks environment, but I can't run it through the Explorer - an error appears that says: "libgcc_s_sjlj-1.dll is missing". I added these lines to my linker options:

-static
-static-libgcc
-static-libstdc++

However, after trying to compile it, I see an error in the build log:

mingw32-g++.exe: error: unrecognized command line option '-static-libstdc++'

How can I fix it?

Can array members be initialized self-referentially?

Consider the following code in which we initialize part of D based on another part of D:

struct c {
    c() : D{666, D[0]} {}
    int D[2];
};

int main() {
    c C;
    assert(C.D[0] == C.D[1]);
}

Is the above program well-defined? Can we safely use another part of the same array to initialize another part of it?

How to view .dat file generated by c++ program?

I am learning C++ file handling. First I generated a .txt file using the fstream library and viewed the data in Notepad, but when I created a .dat file, unfortunately I was unable to open that file in any text editor, though I can read the data from that file in C++. Is there any file viewer or way by which I can view the .dat file?

void display_all()
{
    system("cls");
    cout<<"\n\n\n\t\tDISPLAY ALL RECORD !!!\n\n";
    fp.open("Shop.dat",ios::in);
    while(fp.read((char*)&pr,sizeof(product)))
    {
        pr.show_product();
        cout<<"\n\n====================================\n";
        getch();
    }
    fp.close();
    getch();
}

The above function shows all the data in my Shop.dat file, but when I open that file directly, my OS cannot determine what type of file it is.

Cannot find libcocos2d ios.a library in Xcode

I have downloaded the cocos2d-x 3.6 yesterday. And I do follow the steps in this tutorial page: http://ift.tt/1v5QE5a.

I build and run my new project in Xcode successfully. But I notice that there is no "libcocos2d ios.a" library when I look at my project settings. I tried to search for the answer on the internet, but there doesn't seem to be a correct one.

One more thing: if I clean the whole project and rebuild it, it compiles 621 source files (I think they're the cocos2d source files), which takes too long! I don't want to wait so long every time I clean and rebuild the project. Is there a way to keep it from rebuilding the cocos2d-x source files every time?

the screenshot: (screenshot of the project settings)

--------------- environment ---------------
Xcode 6.3.2
Mac OS 10.10.3
cocos2d-x 3.6


I find a possible solution here: http://ift.tt/1HnZmVF

After trying it, the Xcode project failed to link the two generated "libcocos2d ios.a" files (one for armv7, the other for arm64). I still don't know why this happened. I am building the project for an iPhone 6 (arm64), so why does it say the i386 architecture is required?

here is the error (build for iPhone 6 simulator): (screenshot of the linker error)

Apple Mach-O Linker (ld) Error when making operator friend functions in C++

I'm fairly new to C++, so maybe I'm doing something wrong, but I cannot figure out what it is.

My assignment for a Data Structures class is to implement the following header file:

#ifndef __CS20A__Test__
#define __CS20A__Test__

#include <stdio.h>

namespace cs20a
{
class ThreeDPoint
{
public:
    ThreeDPoint(int x, int y, int z);

    double Distance() const;

    friend bool operator ==(const ThreeDPoint &pt1, const ThreeDPoint &pt2);
    friend bool operator !=(const ThreeDPoint &pt1, const ThreeDPoint &pt2);

    friend ThreeDPoint operator +(const ThreeDPoint& pt1, const ThreeDPoint& pt2);
    friend ThreeDPoint operator -(const ThreeDPoint& pt1, const ThreeDPoint& pt2);

    //dot product
    friend int operator *(const ThreeDPoint& pt1, const ThreeDPoint& pt2);

    int getX() const;
    int getY() const;
    int getZ() const;

private:
    int x;
    int y;
    int z;
};
}
#endif /* defined(__CS20A__Test__) */

Here's my implementation:

#include "ThreeDPoint.h"
#include <cmath>

using namespace cs20a;

ThreeDPoint::ThreeDPoint(int x, int y, int z)
{
    this->x = x;
    this->y = y;
    this->z = z;
}

double ThreeDPoint::Distance() const
{
    return sqrt(pow(x, 2) + pow(y, 2) + pow(z, 2));
}

bool operator== (ThreeDPoint &pt1, ThreeDPoint &pt2)
{
    return ((pt1.getX() == pt2.getX()) && (pt1.getY() == pt2.getY()) && (pt1.getZ() == pt2.getZ()));
}
bool operator !=(const ThreeDPoint &pt1, const ThreeDPoint &pt2)
{
    return !(pt1.getX() == pt2.getX() && pt1.getY() == pt2.getY() && pt1.getZ() == pt2.getZ());
}

ThreeDPoint operator +(const ThreeDPoint& pt1, const ThreeDPoint& pt2)
{

    return ThreeDPoint(pt1.getX() + pt2.getX(), pt1.getY() + pt2.getY(), pt1.getZ() + pt2.getZ());
}
ThreeDPoint operator -(const ThreeDPoint& pt1, const ThreeDPoint& pt2)
{

    return ThreeDPoint(pt1.getX() - pt2.getX(), pt1.getY() - pt2.getY(), pt1.getZ() - pt2.getZ());
}

int operator *(const ThreeDPoint& pt1, const ThreeDPoint& pt2)
{
    return pt1.getX() * pt2.getX() + pt1.getY() * pt2.getY() + pt1.getZ() * pt2.getZ();
}

int ThreeDPoint::getX() const {
    return x;
}
int ThreeDPoint::getY() const {
    return y;
}
int ThreeDPoint::getZ() const {
    return z;
}

These two files in themselves compile fine, but when I add a driver like

#include <iostream>
#include "ThreeDPoint.h"

int main(int argc, const char * argv[]) {
    cs20a::ThreeDPoint pt1(1,2,3);
    cs20a::ThreeDPoint pt2(1,2,3);

    // insert code here...
    std::cout << (pt1 == pt2);
    return 0;
}

Or any driver with ThreeDPoint == ThreeDPoint or ThreeDPoint + ThreeDPoint, When I try to build it, I get this error:

Ld /Users/jaredjensen/Library/Developer/Xcode/DerivedData/CS20A-fztpdusrphrftebtzmkhjpjcduym/Build/Products/Debug/CS20A normal x86_64
cd "/Users/jaredjensen/Google Drive/XCode/CS20A"
export MACOSX_DEPLOYMENT_TARGET=10.9    /Applications/http://ift.tt/17KlLIW -arch x86_64 -isysroot /Applications/http://ift.tt/1rXiAEi -L/Users/jaredjensen/Library/Developer/Xcode/DerivedData/CS20A-fztpdusrphrftebtzmkhjpjcduym/Build/Products/Debug -F/Users/jaredjensen/Library/Developer/Xcode/DerivedData/CS20A-fztpdusrphrftebtzmkhjpjcduym/Build/Products/Debug -filelist /Users/jaredjensen/Library/Developer/Xcode/DerivedData/CS20A-fztpdusrphrftebtzmkhjpjcduym/Build/Intermediates/CS20A.build/Debug/CS20A.build/Objects-normal/x86_64/CS20A.LinkFileList -mmacosx-version-min=10.9 -stdlib=libc++ -Xlinker -dependency_info -Xlinker /Users/jaredjensen/Library/Developer/Xcode/DerivedData/CS20A-fztpdusrphrftebtzmkhjpjcduym/Build/Intermediates/CS20A.build/Debug/CS20A.build/Objects-normal/x86_64/CS20A_dependency_info.dat -o /Users/jaredjensen/Library/Developer/Xcode/DerivedData/CS20A-fztpdusrphrftebtzmkhjpjcduym/Build/Products/Debug/CS20A

Undefined symbols for architecture x86_64:
"cs20a::operator==(cs20a::ThreeDPoint const&, cs20a::ThreeDPoint const&)", referenced from:
  _main in main.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

I've tried everything and I can't fix this error. This only happens when I make the operators friend functions. If I make them members, and delete one of the parameters, it works fine. Is there something I'm doing wrong?

How can I get the number of instructions of a program?

I have written and cross-compiled a small C++ program, and I can run it on an ARM board or a PC. Since ARM and a PC have different instruction set architectures, I want to compare them. Is it possible for me to get the number of instructions of this C++ program for both ISAs?

Error when adding typedef in header

I'm new to C++, and I'm learning from Accelerated C++ (for anyone with the book, I'm trying to run the program described in §7.4)

The program I'm looking at uses some typedefs - I gather that if I add these to a header file, any source file which includes that header will be able to use the typedefs too.

My header is:

#ifndef READ_GRAMMAR_H_INCLUDED
#define READ_GRAMMAR_H_INCLUDED

typedef std::vector<std::string> Rule;
typedef std::vector<Rule> Rule_collection;
typedef std::map<std::string, Rule_collection> Grammar;

Grammar read_grammar(std::istream& in);

#endif // READ_GRAMMAR_H_INCLUDED

This is giving me the error error: 'map' in namespace 'std' does not name a type

If I change the third typedef to typedef std::vector<Rule_collection> Grammar; (not that I want this, just for example) it builds with no errors.

Any idea what the problem is? I have no idea whether I'm doing something trivial the wrong way, or whether the whole approach is incorrect

PHP Host Gator outgoing ports blocked, now what do I do?

I have a C++ Server which I give out to users who wish to host (it's a game server).

When a C++ Server goes online, it posts to a PHP script on my Host Gator account. This PHP script then tries to connect to the C++ Server and wait for a response. If it gets a response, then that C++ Server is put into the Master Server List via MySQL. I connect to the C++ Server from PHP using socket_connect().

Unfortunately, I got a Connection Refused (111) error on PHP. That's weird I thought, the C++ Server port IS open (tested using www.canyouseeme.org, it's open).

I then did a bit of research and contacted Host Gator Technical Support. They said some outgoing ports are blocked. The thing is, I'm allowing people who host the C++ Game Servers to open ANY port they want.

Host Gator will not unblock all outgoing ports.

So what do I do now? How do I check whether a server can be connected to through PHP on Host Gator? I think I might be screwed. Is the only viable solution to tell my users that they have to use a specific port? What if they want to host multiple servers?

Thread unable to read memory from other thread although memory is allocated

I am using the frameworks GSound, Ogre3D and Visual Studio 13 to try to set up the basic features of GSound.

First I initialize the shared pointers (a pointer class for multithreading provided by the rim framework that GSound is based on) for my sound classes, and it seems that they are successfully initialized:

sSystem = rim::Shared<SoundPropagationSystem>::construct();
sScene = rim::Shared<SoundScene>::construct();
sSource = rim::Shared<SoundSource>::construct();
sListener = rim::Shared<SoundListener>::construct();

(I also tried this with a "standard" pointer using the "new"-keyword with the same result.)

After this I give the SoundDevice a callback function pointer so that the sound device's buffer can be updated. This function is called from another thread, outside of my main Visual Studio 13 thread. When I assign it a function, the memory is still allocated.

Then when I debug into the callback-Function itself, the visual studio debugger says that it cannot read the memory and throws: "Unhandled exception: Access violation reading location 0x0000009D."

In the following code, only the parameters are accessible; the other pointers (here only sListener is used, but the others are not readable either) cannot be read:

Size TutorialApplication::soundOutput(rim::sound::SoundDevice& device, SoundBuffer& buffer, Size numSamples, const Time& time)
{
    SoundListener listener = *sListener;
    numSamples = sSystem->renderListener(&listener, buffer, numSamples);
    numSamples = hdrFilter->process(buffer, buffer, numSamples);
    return numSamples;
}

I assume that this is because the memory is only visible to my main thread from VS13. After googling, I only found solutions that did not work for me.

After looking into the thread window it shows that the outputSound-thread belongs to the same process as the thread where the pointers should be initialized.

How can I divide bitsets?

I use two bitsets to store two polynomials. I want to divide the first one by the second and get the remainder of the division. For example, on paper I would do it like this:

w1= 110011010000000
w2 = 1111001

    110011010000000 : 1111001 = 101000100
    111100100000000
    ---------------
    001111110000000
    001111001000000
    ---------------
    000000111000000
    000000111100100
    ---------------
    000000000100100   -> remainder = 100100

GlMapBuffer takes longer and longer

I have a problem with glMapBuffer. I'm making a simple program and I wanted to implement sprite rendering. Since doing 500 draw calls for 500 sprites is really slow, I wanted to use one big VBO and update it every frame with new data. The good thing is that it is faster. The weird thing is that it slows down while the program is running.

I profiled it, and the source of the problem is that calling glMapBuffer takes longer and longer. At first it uses a few percent of one tick of the program, but after about a minute it's already at 36%. The profiler (VS Community) just points me to nvoglv32.DLL, but that's a dead end for me.

I seriously have no idea how to fix it.

Here is code of Sprite renderer: http://ift.tt/1IBzOz1 (when I copied it here, it looked really weird) and header file: http://ift.tt/1Hmjrd4

There are other things, but you can ignore them; they don't look like they cause the problem.

I initialize it, then in the main loop I basically do:

renderer.Begin();

for(data)
   renderer.Submit(data.pos,data.dims,data.col);

renderer.End();

renderer.Render();

Is this the correct way to implement something like this? I mean, maybe I'm forgetting some important GL call somewhere.

I let it run for 15 minutes, and now just mapping the buffer takes 20 ms. :( I also updated my drivers, but even that did not help.

How to get window title bar (active and inactive) color?

I'm making a program and I need to paint a rectangle of the same color as the title bar.

If I try to get the color like this:

ARGB rgbActiveColor = GetSysColor(COLOR_ACTIVECAPTION);
ARGB rgbInactiveColor = GetSysColor(COLOR_INACTIVECAPTION);
rgbActiveColor |= 0xFF000000;    // Because of alpha
rgbInactiveColor |= 0xFF000000;

I get a totally different color in Windows 8. It always returns an orange or brown color instead of the actual color (let's say, blue).

Using DwmGetColorizationColor works, but the color is darker because I need to eliminate alpha. I try to do it like this:

BYTE r = ((RED * ALPHA) + (255 - ALPHA)) / 255; // R' = (R * A) + (1 - A)
BYTE g = ((GREEN * ALPHA) + (255 - ALPHA)) / 255; // G' = (G * A) + (1 - A)
BYTE b = ((BLUE * ALPHA) + (255 - ALPHA)) / 255; // B' = (B * A) + (1 - A)

So, my problems are:

  1. I don't know how to correctly convert the returned color from ARGB to RGB
  2. I don't know how to get the inactive title bar color